'Automated' Medical Blogs (Written by ChatGPT)

This is a 'job offer' currently appearing on UpWork:

Writer With Experience in Health/Genomics to Write Blog Using Chat-GPT.
Looking for blog writers with any experience in healthcare, genomics, bioinformatics, informatics to write blog articles. I actually want the individual to leverage chat-gpt so we can pump out lots of articles. Writer will be required to just edit/slightly modify blog posts. Will pay per blog post.

$5.00

Fixed-price

So this is what we have become, as a species?

I guess all I can do at this point is implore genuinely interested/intelligent members of my species to ignore 'blogs' from now on, and only trust what they read in peer-reviewed medical journals.

Comments

  • +4

    I found ChatGPT (well, MS Bing) to be at least reasonable as a starting point if you're looking at a specific topic. It's kind of like reading a Wikipedia article, but it has a bit more to it I guess, because you can specify what you want to ask it about. I found it fascinating that it was able to identify multiple articles and combine them to give me coherent information. I wouldn't trust it at face value, but it gives me something to go on: things I can look into.

    Blog articles and peer-reviewed articles have different target audiences to begin with. If it's for the general public and it's reviewed by a professional before being published, why not?

    • I use it quite a lot at work, particularly for rewriting what I already wrote into something that sounds better, or, as you say, as a starting point for what to write.

      Considering the quality of most content farms online, it won't be any worse quality. It'll also have a lot fewer spelling mistakes.

  • +3

    Is ChatGBT anything like ChatGPT?

    • +4

      It's missing the L & Q.

  • +1

    $5 per 2,000 words? This job surely wasn't offered to an Australian.

    • Or a medical professional

  • +4

    members of my species to ignore 'blogs' from now on

    I’ve been ignoring them for a long time.

  • -3

    Peer review scientific/medical literature = left wing of chicken
    ChatGPT authoring medical blog articles = right wing of chicken

    Welcome to the rise of the real time rewriting of all history to remove anything outside of the corporate engineered consensus as "misinformation".

  • -3

    "only trust what they read in peer-reviewed medical journals."

    Yeah nah, I've seen plenty of dodgy stuff in those things over the years, and they are also sponsored by pharma, so I take EVERYTHING with a grain of salt until I prove it for myself in some way at least. I suspect honest science (at least in some areas) died a long, long time ago, sacrificed on the altars of profit and control.

    • +1

      "this so-called peer reviewed medical science journal hasn't been proven to my satisfaction, time to Do My Own Research on Facebook" etc

      I thought all that cooker stuff was like crypto, are people still actually doing this?

      • -5

        Well, you used the word 'cooker', so that disqualifies you from a number of responses. Also, I don't have FB nor Twitter. There are plenty of medically/scientifically qualified people out there who are able to provide objective opinions on things. There is also the crisis of science to be contended with, and the huge number of conflicting 'peer reviewed' studies available in journals or PubMed etc. Your automatic reaction of shooting the messenger rather than addressing the message is also a big tell. But if your world-view requires you to believe that anyone who questions the establishment-approved narrative and belief structures simply must be a Trump-supporting RWNJ of some description, then there's not much point in continuing the conversation until you are ready to take your blinders off.

        • +1

          Not sure what level of magical thinking you need to unlock to access "haha you called me a cooker that means your attempts to discredit my conspiracy theories are invalid" but cook on, you crazy diamond

  • +1

    "so we can pump out lots of articles" just sounds like they want to run a content farm.

    This is the type of shit that creates cookers.

  • Species 8472?

  • +2

    I, for one, welcome our new AI overlords.

  • This brings up the story of that lawyer in the US who asked ChatGPT and it gave him 6 cases that didn't exist. I just don't think people understand what LLMs actually are. It isn't even AI.

    Article from Forbes

    What really scares me is people believing LLMs are AI and that they will do everything. Maybe the government should ask it to solve the social housing problem. Then how to solve the labor problem of finding enough tradespeople. I can't believe like 2.5m public sector workers and highly paid consultants can't use ChatGPT. Crazy, isn't it?

  • +1

    Did anyone really ask for all this mass produced AI crap? I saw the Sora stuff from OpenAI the day it was announced and I thought to myself "Who wants this?"

    • -1

      Did anyone ask for acres & acres of wind farms & solar panels? Or how about the baiting of bees & their forced destruction? How about the plethora of government plans in the works, one of which is called "managed retreat"? Did anyone ask for the genetic augmenting of crops, animals & now humans?

      No, no one did, but those in power & their class of minions have been pushing for & getting their way for decades, if not centuries.

      • No, no one did, but those in power & their class of minions have been pushing for & getting their way for decades, if not centuries.

        You know, growing up on sci-fi I was convinced that the AI apocalypse was at least going to be cool and big-budget. The Matrix/Terminator-esque stuff, like the last remnants of humanity having pointy-shooty laser battles with our impossibly-coordinated robotic overlords before they inevitably win, and then trap us in Lotus-eater machines while they burn our souls as biofuel. That sort of thing. Sure, it'd still be horrific suffering, but at least it would be cinematic.

        Alas, reality is always a letdown. Forget glorious-though-futile battles against robotic killing machines, now there's a nonzero chance we all just perish from thirst because some corporate fatcat decided that 5 more minutes on the chatbot was a better use of water than sustaining a human life was.

        • +1

          Here's some propaganda for you.

          https://www.csiro.au/en/research/natural-environment/natural…

          For people to understand what is being done by the corporations - using governments as their instrument - to nature & human beings one has to read that page & understand how language is being weaponised against us. "Natural asset accounting" is going to benefit the super rich, but the language used is one of benefit to humanity.

          https://www.impactag.com.au/natural-capital/

          Lol. "Monetizing" natural capital for our "clients".

          This is the future the parasitic overlords are pushing for.

  • -3

    It's catering to the dumbest of the dumb-dumbs.
    As long as it's not "mainstream", it's 10000% trustworthy.

    • As long as it's not "mainstream", it's 10000% trustworthy.

      The second part does not logically follow the first part.

      Talk about dumb.

      • -3

        Most dumb people don't get sarcasm.

        • Lol. It actually applies to the so called consensus people like yourself believe in.

          • -2

            @mrdean: "mainstream/consensus = not factual"

            Do you agree or disagree with that statement?

            Another interesting statement I came across some time ago, something along the lines of:

            "Some individuals believe in randoms who tell them what they want to hear (conspiracy theories, that they're a victim of the Gov etc), as a means to explain or blame whatever mental issues or life issues they have experienced"

            Do you agree or disagree with that statement?

  • -1

    Do you agree or disagree with that statement?

    1. "may be misleading". And it depends on the meaning & use of the word "factual". Governments & corporations have gone to great lengths to change the meaning of words. One example of many is the meaning attributed to the word "placebo", especially in the context of clinical trial usage. The public generally understands placebo to mean a "sugar pill" or "saline", but in the other context it does not mean that at all. In the clinical trial context it can actually mean another "vaccine" without the antigen. It is not just misleading, but deception.

    2. "propaganda". It is used just as one tool of many to discredit people who have questions & a different way of seeing things by charging them with a "mental issue".

    • -1
      1. the clinical trial context it can actually mean another "vaccine" without the antigen. It is not just misleading, but deception.

      It's the antigen that causes a response… Why wouldn't you test the antigen? What's so confusing?

      2

      Well, the person himself/herself admitted that he/she had mental issues for a long time. The therapist didn't help, so they went on to conspiracy theories. Whatever makes them feel normal, I suppose, till they try to convince others to live in their delusional bubble.
      Don't be so quick to blame others.

      • It's the antigen that causes a response… Why wouldn't you test the antigen? What's so confusing?

        Lol, this is the level of intelligence (or brainwashing for the conspiracy minded). People can't even see the issue. Good job education system!! In simple basic terms, the "confusing" part is the excipients & adjuvants in this particular example, which of course has been superficially "debunked" by trustworthy "authoritative" sources.

        • -1

          You wanna expand on that and link some sources/data to back up your claim? Don't give me YouTube or some random's blog.

          I'm curious, what educational and professional background do you have?

          It's great that the mentally ill person found some purpose in life, albeit pursuing someone else's agenda. How can they be helped?

          • +1

            @Ughhh:

            You wanna expand on that and link

            Look Champ, I have zero interest in proving to you what you obviously either "can't" see or "refuse" to see, because I know from very careful study of how so called "skeptics", "debunkers", "fact-checkers" work, that whatever I cite will be discredited, no matter how absurdly.

            If you really want to dig into the example I cite (because apparently common sense isn't enough for you), then I suggest you look up the recent exchange between Paul Offit & Aaron Siri Esq on the subject of "placebos". Both have SubStacks & the arguments they both lay out are there for all to see. Yeah, check out those "randoms" blogs!! Hahaha.

  • What happens when ChatGPT starts digesting blogs on the internet written by ChatGPT? Things will degenerate quickly.

    • Yo WanGBB

      I fed your question into ChatGPT, and it spat this out in about 3 seconds:

      As of my last update in January 2022, I don't have the ability to access real-time data or specific information about my own interactions or any subsequent developments. However, I can provide some insights into the hypothetical scenario you mentioned.

      If a model like ChatGPT were to generate content that is then used as input for further training, it could potentially lead to a reinforcement loop. The model might start to replicate and amplify certain patterns present in the training data, which could include biases, inaccuracies, or idiosyncrasies present in the original content.

      This is known as "circular reasoning" or "feedback loop," where the model reinforces its existing knowledge without external validation. It could result in the model becoming more confident in its own biases or generating content that aligns closely with the style and patterns it has previously encountered.

      To mitigate this, diverse and high-quality training data, along with careful curation and oversight, are crucial. It's essential to ensure that the model is exposed to a broad range of perspectives and information to prevent it from overfitting to specific patterns present in its own outputs.

      It's important to note that any updates or changes to the behavior of models like ChatGPT would depend on the training process implemented by its developers and the measures taken to address potential issues like bias and circular reasoning.
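      The "feedback loop" the bot describes above can be illustrated with a toy simulation (my own sketch, nothing from OpenAI): pretend a model's next training corpus is just samples drawn from its previous output. Phrases that fail to get resampled are gone for good, so the diversity of the "corpus" can only shrink, generation after generation.

      ```python
      import random

      def resample_generations(vocab_size=1000, generations=20, seed=42):
          """Toy model collapse: each 'generation' is trained only on text
          sampled (with replacement) from the previous generation's output.
          Diversity can only shrink, because resampling can never
          reintroduce a phrase once it has been lost."""
          random.seed(seed)
          corpus = list(range(vocab_size))  # 1000 distinct "phrases"
          distinct_counts = [len(set(corpus))]
          for _ in range(generations):
              # next model's training data = samples of the previous output
              corpus = random.choices(corpus, k=len(corpus))
              distinct_counts.append(len(set(corpus)))
          return distinct_counts

      counts = resample_generations()
      print(counts[0], "->", counts[-1])  # distinct phrases only ever decrease
      ```

      Each resampling pass drops roughly 1/e (about 37%) of the remaining distinct phrases in expectation, so after a couple of dozen generations only a handful survive. Real LLM training is vastly more complicated than this, but the same one-way loss of diversity is what researchers call "model collapse".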
