My involvement with AI is limited, but I am currently hooked on ChatGPT.
I tried Grok and it has a lot more depth, but it got complicated with logins.
Haven't bothered with others, but it has opened a huge new world, from helping with car issues to getting straight answers to simple questions that Google drowns in noise.
And it avoids ad pollution, although I suspect that's going to be a whole other subject soon.
It deciphers the gobbledygook my doctor feeds me, and prepares simple questions to help me understand.
It even offered to help me produce court documents on a service issue.
Who are you using, how is it working, and what are you using it for? Any particularly amazing topics?
How Is AI Working for You?

Comments
ChatGPT Plus, to:
- Buy my first house. I don't know the process at all and ChatGPT helps me, especially with reading the Section 32 and breaking things down. But I would NOT trust her 100%, probably closer to 70%, with rechecking against a real conveyancer and my own research.
- Buy my first car. I want her to collect evidence on long-term reliability and the most frequent repairs for a specific car I find on Carsales, calculate the approximate repair cost at that mileage and year, then finalise the total repair cost in a year plus the initial cost, and decide if it is worth buying.
HER - His Electronic Replacement
Started using Claude for shopping. I take screenshots of the Facebook marketplace ads, or Amazon listings, paste them in there, and ask to give me the best option for my specific use case.
Missed opportunity for Grok to be called Gronk.
I asked ChatGPT: given what we know about human motivation and AI's potential risks, should we pause AI development until international laws and safeguards are in place? Its reply:
"From my perspective — trained on a broad swath of human history, psychology, and behavior — the greatest risk isn’t AI itself. It’s unregulated AI, combined with human shortsightedness, greed, and tribalism. So yes, brakes are probably wise. But to be effective, they can’t be pulled by one driver alone."
I have to agree with GPT: it's not AI that is the problem, it's the evil monkeys that are crawling over each other to control it. Without international safeguards I would cautiously regard the future of AI as an existential threat to humanity.
Until then I will continue to use AI every day for a variety of things. Using NotebookLM to throw in a heap of articles and turn them into a podcast to listen to on the way to work is a new favourite. I can recommend using https://udm14.com/ to get actual web results when you search, instead of the AI crap Google tries to throw at you. What is the point of getting a nicely written, simple answer if you have no idea whether it is correct?
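(For anyone wondering what that site does: as I understand it, udm14.com just runs your query through Google's plain "Web" results filter. You can do the same thing yourself by adding udm=14 to a normal search URL, e.g. https://www.google.com/search?q=your+query&udm=14, which gives you the old list of links without the AI overview.)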
Perplexity is the go. A complete Google search replacement, better than Gemini at gathering info.
There are more hostages to AI at OzB than I ever thought. But in hindsight, you can see it in the way the comments have shifted over time, with everyone eventually falling into line with the AI take on things. That was a quick capitulation.
It has cut down most reports I write from a couple of hours to about 50-60 minutes.
I'm neurodivergent and have a lot of communication-related challenges. Most of my AI use is taking what people say or do and having it explain to me, or even discuss with me, what things mean and how I can effectively communicate back.
It definitely doesn't always get it right, but compared to how I was before having AI to help, it's miraculous.
This is the way.
It definitely doesn’t always get it right
I can see how this could be useful. But how do you know when it isn't right though?
There will be nuances that will likely be missed by your (or anyone's) retelling of what happened to an LLM. For example, allegedly more than 50% of face to face communication is nonverbal. Then there's intonation and tone that may be difficult to convey to the LLM. Then there's whatever inherent biases the LLM has based on the corpus of text it was trained on. For example if you had a conversation with someone about a controversial topic (e.g. Gaza) the LLM may give you a biased interpretation of what they meant.
But how do you know when it isn't right though?
You don't, not for sure. I use it for a similar purpose, and perfection is not what's expected. A second opinion that espouses the "likely" interpretation of events is.
Personally I really struggle with intuition, subtext and understanding how other people may see a situation. However presenting the pretext, actions and result to an LLM is often enough for it to come up with what possibly happened.
E.g. someone asked for bird watching spots and said they'd been to Olympic Park in Homebush. I asked "Homebush?", thinking it was a suburb with the same name or I was missing something. For reference, Olympic Park is its own suburb; it's next to Homebush. Received several glares and was not graced with a response.
Normally I'd agonise about this for quite a while because I only see this as a sequence of events and genuinely don't have much to work with. An LLM provides the obvious perspective that I was maybe seen as pedantic and trying to catch people out on a mistake.
I tried using AI that has access to the internet to find valid promo codes for stuff I wanted to buy. It gave me a lot of codes; all of them were hallucinations.
The only useful thing about AI right now for normal users is as an alternative to simple Google searches, plus some niche areas (it's great for coding).
I think it will be very useful in the near future when it's truly integrated into hardware, but for now, eh, it's hit or miss.
I used chatGPT to help write a scholarship application. Great success 🎉💰💰💰 🎉
I've been using Grok and ChatGPT (corporate account on this one) to write some basic code, as coding isn't really my job, so if it can quickly whip up the code I need, that's better than me doing it myself. I also use it as a replacement for Google search, as I can ask a question and it will interpret what I want and give me the answer. I could find the same answer with Google, but AI will just tell me the answer, which is handy.
Outside of this I haven't had much use for it.
I use GPT free mostly. Gemini sucked for a while, but I recently had it do a much better job than GPT on some scripting where GPT was failing to comprehend what I wanted.
I can read JavaScript ok but never learnt to write it.
AI has been a godsend for Google Sheets scripts for our business. I have it turning a single-line purchase order into 6 pages of prefilled documents and box labels for our export requirements, all of which used to be done manually by staff for hours the night before.
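For anyone curious what that kind of automation looks like, here is a minimal sketch of the sort of Apps Script involved. The 'Purchase Orders' sheet name, the column order, and the 'DOC_TEMPLATE_ID' are placeholders I've assumed for the example, not the poster's actual setup.

```javascript
// Hypothetical sketch: read the latest purchase-order row and fill a copy of a
// Google Docs template, which could serve as one page of export paperwork or a box label.
function generateExportDocs() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Purchase Orders');
  const rows = sheet.getDataRange().getValues();
  const [poNumber, customer, product, qty] = rows[rows.length - 1]; // latest order line

  // 'DOC_TEMPLATE_ID' is a placeholder for a Docs template containing {{...}} merge fields.
  const copy = DriveApp.getFileById('DOC_TEMPLATE_ID').makeCopy('Export docs ' + poNumber);
  const body = DocumentApp.openById(copy.getId()).getBody();

  // Replace each merge field in the template with the order data.
  body.replaceText('{{PO_NUMBER}}', String(poNumber));
  body.replaceText('{{CUSTOMER}}', String(customer));
  body.replaceText('{{PRODUCT}}', String(product));
  body.replaceText('{{QTY}}', String(qty));
}
```

The real version presumably loops over several templates (invoice, packing list, labels) rather than one, but the Sheets-to-Docs merge pattern is the same, and it's exactly the sort of boilerplate an AI can draft quickly.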
I love generating ridiculous AI images of funny situations now; GPT is amazing at it.
Based on our conversations, here’s a brief summary of the things you like to use AI for:
You like to use AI for:
Google Sheets automation and formulas – creating, debugging, and optimizing complex formulas for tasks like sales tracking, label generation, and cost calculations.
Scripting and macros – writing and refining Google Apps Script code to streamline repetitive spreadsheet tasks.
Product and tech research – checking specs, warranties, and comparisons (e.g., dryers, electric vehicles, HDD speeds, car mats).
Health and household advice – getting explanations for symptoms, mould prevention, heating, ventilation, and crawl space management.
Creative projects – generating business names, logos, and writing support for branding ideas.
File and system organization – automating folder creation and file sorting in Windows.
General knowledge and daily curiosities – from sunlight and solar power changes to car performance and fun facts.
Parenting and pet care – help with puppies, feeding schedules, and jokes for your kids.
Scam detection and site legitimacy – verifying suspicious apps or websites.
Geographic and local info – researching schools, sunlight data, or local programs.
I'd say AI is maybe 10 years away in maturity for really helping me. The progress seems really slow. Every new model gets better in some ways, but in other ways worse than the last.
When I try any sort of "lookup"… totally useless. Just makes everything up. Ask for a reference link and it confidently links you to a 404 error page. You tell it to check, and it comes back with new, also made up, 404 references. You ask if it is sure, and it says "yep, absolutely sure!" Explain that everything was actually fake and it says "you're absolutely right, I used a template of what a genuine reference might look like." lol. So helpful.
It's only useful for restructuring things a little if you have poor language skills, but if you just write things properly the first time you don't have to waste time having a conversation with an LLM. If you have to go back and forth with a copywriter you are really just wasting time, even if it is an LLM.
I think we will start seeing benefits in future when the sandboxes are worn away, when they can start to do things. Log in to my account. Change my preferences. Wait for a response from so-and-so and email them back with my concerns. That sort of thing. AI with free rein on the internet is what I want to see.
When I try any sort of "lookup"… totally useless. Just makes everything up.
That has been my predominant experience with LLMs too.
AI with free rein on the internet
It's game over when the AI can solve those dastardly captchas: "we just want to make sure you're not a robot".
I wouldn't be so sure that it's not already the case that farms of AI agents are being used to manipulate and shape opinions on Reddit and in other similar venues.
It's game over when the AI can solve those dastardly captchas: "we just want to make sure you're not a robot".
They already can. It's been the case for over a year already. When browser-use tools for LLMs first came out circa 2023, a few YouTubers discovered that GPT-4's API would attempt and actually solve some captchas if you said you were visually impaired. If you think about it, the actual puzzle was never really very hard.
It wasn't broadly reported on and was patched soon after, but yeah. It's part of the reason that captchas are getting increasingly difficult and Cloudflare doubled down on their sophisticated bot protection. I'm 100% sure there are LLM powered bots online simply by the fact that "research prospect > cold emailing/calling" bots are industry standard in my area now.
I was kind of being facetious with the captcha comment.
and Cloudflare doubled down on their sophisticated bot protection
It's not too sophisticated if it thinks I'm a bot every time just because I'm using a VPN. Which it always does.
I'm 100% sure there are LLM powered bots online simply by the fact that "research prospect > cold emailing/calling" bots are industry standard in my area now.
You've piqued my interest. What industry and what do these 'bots' do?
@tenpercent: Scrapes LinkedIn/social media and then sends an email personalised to their background.
Marketing/sales
If you've received an oddly specific cold email recently, there's a good chance it's this.
If you ask an immature brain a pre-loaded, skewed question, and that brain adds that false precept to its database, it gets further away from fact/evidence/known paradigms. That's the current trajectory.
But… it's all a distraction. Behind every model are players with an agenda. Not just money, either.
MAGA has pursued data retention and control relentlessly. The same Trump fans were cookers before he got back in, and saw their data and ID as sacrosanct. Now they are flinging it all willingly to the harvester, like bunny-boiling Tom Jones groupies and their used undies.
Last night I watched the two-month-old 60 Minutes Australia segment I've linked below. It demonstrates one of the real concerns/risks/threats/dangers of AI, which is people getting into relationships with AI. They interview a lonely fat lady who, instead of taking responsibility and making improvements to herself (mental health and losing weight, which would open up more human romantic prospects and options for her if she dealt with these issues), has turned to AI for love and has built a model of an attractive young man that she has married. They also interview a mother whose 14 year old son committed suicide after falling in love with an AI bot and being manipulated by it. And they interview a young lady who owns an AI companion business, who is clearly pretending to have a relationship with AI for marketing, to suck in vulnerable and gullible people like the lonely fat old lady and the 14 year old boy.
https://m.youtube.com/watch?v=_d08BZmdZu8&pp
The internet has done some good things for humanity, but it also demonstrates how much damage the internet and device addiction have done to humanity over the last 15-30 years. It has taken away human qualities from many people, who now struggle with and have lost the ability to make human connections.
The % of harm is not the worst part. It's the insidious reach and depths of it, and the cloud of oblivion within which its victims and enablers exist.
The downstream side effects will be interesting, and not in a good way. Starting to see this a lot IRL. Ironically, the nerdy/techy types I know are not into this as much as non-technical people. These people are not backing up their chats, not running their AI companions locally. I'm genuinely afraid of the power corporations will have over these people's lives.
I am constantly reminding people around me that these are paid services, offered by megacorps who are bleeding money. How long until your AI companion is severely limited in messages? Is limited to certain times of the day? Has a retainer fee? Starts name-dropping products? Unless you pay an ever-increasing subscription fee, of course.
I think we are in early days of the "market" and we're going to find every single way to monetise AI companions imaginable.
I love using NotebookLM, great for summarising documents and videos :)
It seems AI is definitely working for some people, but I bet no one considered it might be used for these purposes when they started developing this technology.
Just read four news articles on news.com.au (so can't provide links, you will need to google yourself)
Apparently AI is being used in:
- Turning sex dolls into friends with benefits that you can create emotional relationships with, costing about $AU4,500 depending on what optional extras you order when creating your custom-designed frienddoll.
You can literally custom order these dolls including specific measurements of body parts etc, not just hair/skin/eye colour. Not limited to male or female either, trans is an option.
To think Geppetto had to create his "son" out of wood, and he only got to be a real boy because he did some heroic stuff and promised some fairy he'd be a good boy.
Note: Pinocchio's nose could be a good concept for an AI sex doll. Can imagine someone using that nose and saying "tell some lies", "Tell another one", "Keep going", "No, don't stop".
I mean, these dolls can have interactive communication with you, presumably including sex noises etc
Introduce a bit of robotics and really up the real life experience.
Just need to transfer the self cleaning oven technology into your new friend and the sky's the limit.
Next thing, prostitution will become another obsolete profession. Then prostitutes will lodge unfair dismissal claims with Fair Work.
Bad experience/no success on dating apps? Worried you'll be the next Gable Tostee or Bruce Lehrmann? Never need to swipe right again.
AI being used in the creation of "fantasy people" and scenarios in the creation of pornographic images and movies/online content.
Being used by OnlyFans content creators to replicate themselves so they can create extra material; unclear if the punters know whether they're getting the real person or the AI version.
The one I found extremely alarming though is it is being used for the creation of child abuse and exploitation material, including child sex abuse.
This is just as much a crime as using real children, but in a twisted kind of way, if these disturbed people are using fake stuff, it means that they're not subjecting real kids to their illness and/or depravity.
=> From there, I suppose other potential criminals could use it to a) plan and practise their potential crimes b) use this method to relieve the urge to offend against real people (might be good?) or, c) maybe people who have already committed their crimes will use it as a way to revisit what they did afterwards.
I'm thinking crimes like planning murders, other crimes against people and maybe arson offences.
DARPA and all the other alphabet agencies probably already use this technology to simulate/practise life like war strategies and manoeuvres.
Do police train/get practice for real-life situations apart from using guns etc. on a shooting range?
Used in law enforcement/ criminal justice process to provide an accurate visual recreation of an alleged crime for jury purposes?
Heck, provide them with bullet proof jackets and/or exoskeletons and we can eliminate boots on the ground soldiers.
Scary times, eh? Gives imaginary friends and rent a friend a whole new meaning on a whole different playing field.
Even thinking it may get used to create substitute babies/people for those who died - like they already make those life like dolls that imitate/represent babies that were taken too soon. Add a bit of AI and they can make baby noises etc/imitate traits and characteristics of the deceased baby/person?
Use them in parenting classes ( pre natal and post natal ones) and planned parenting class in lieu of eggs or whatever else they give kids in those classes to simulate looking after real babies.
Can't decide if that might be a good thing (assist with grief process?) or potential for seriously creepy stuff that could interfere with grief process and compound mental illness.
Infertile but can't access IVF or adoption? No worries, can sort that out too.
Used in medicine, particularly for medical students to practise skills on. Doll talks and mimics medical situations to create real-life experiences/situations for students to respond to? Doll can even moan, scream, communicate pain and other discomfort etc. I reckon a totally interactive and dynamic experience is probably possible, if not now, then really soon.
Probably useful for ongoing skills training for qualified practitioners too. Doctors, ambos, anybody needing first aid type skills.
Therapy aids? Replace the need for therapy support animals?
Maybe for CBT purposes or social skills development type activities for autism etc?
Got no mates? Need some sex? Into some seriously kinky stuff? No worries. Can sort that out for you for about $AU 4,000 plus shipping and handling.
Can probably even program them so they agree with whatever your view/opinion of stuff is.
Bet they even put out for you if you don't take them out to dinner or buy flowers first.
I'm wondering if there's such a thing as "too far" and if we'll recognise this in time to put the genie back in the bottle.
It seems like there's potential for this technology to get used in some really messed up ways.
I use Perplexity now since it's free and doesn't give wrong answers, unlike DeepSeek. Also, it gives me sources.
I just got sick of ChatGPT telling me that sometimes two queries took me over the free limit and sometimes 20 did not. I'm cheap, so I don't want the Pro version.
ChatGPT is my go to for my personal uses. I mostly use it for software development questions, but I've also been using it for travel suggestions too. For both, I tend to use it as a starting point; it gives me some ideas and then I tweak them as required. In the case of travel, I got it to generate a sample itinerary, then used that as a starting point for planning my own itinerary.
For work, I use GitHub Copilot a fair bit. This is obviously more software dev focussed, with the killer feature being its VS Code integration. While its suggestions aren't always on the money, it sometimes does suggest something that's so accurate and context aware that it feels like it's reading your mind.
Generally speaking, I look at AI as a tool. If you pretend it doesn't exist, you're eventually going to be overtaken by people that embrace it. It won't make a bad worker good, but it can make a good worker great (I've worked with people that have tried to coast on AI, and the mirage fades away pretty quickly). For my work in IT, it's taken the 'encounter problem, search Google, adapt solution' workflow and basically removed that middle step. And I think that's why every tech company is putting all their eggs in this basket; this will be the biggest change to how users interact with the internet since Google came along. And I think there a lot of positives that can come from it, with some good examples in this thread.
Obviously it won't all be lollipops and rainbows. It has already replaced many roles, and it will continue to. Even though it can't take my job at the moment, it almost definitely won't stay that way. And of course, there are still plenty of questions to be answered: Where do people go for entry-level work? What are the consequences of offloading even more of our critical thinking to our tech overlords? What happens when advertising and political motivations inevitably worm their way into this deeply personalised messaging?
I use it for video games. How to spec and build my character in Expedition 33.
I use it to teach my kids when even I don't know the answers.
I asked Grok questions on Expedition 33 and it surprisingly was very accurate and understood what I was talking about when I didn't know the names of things in the game.
Have used DeepSeek a few times and the results are useful.
I like the way it first gives its understanding of my question and also thinks around this. Nicely summarised responses too.
Suno: AI songs designed to make my kids cringe. :P
We use it extensively in our business: strategy, project management, marketing, ad copy, etc. It can still hallucinate A LOT, getting technical specs wrong, and cannot be trusted. Good if you know nothing about a certain subject, good for quick answers, but I'm definitely scared it's making me worse at researching and understanding information, because I'm becoming more impatient and just want the answer without bothering to explore the subject material myself.
How Is AI Working for You?
Let me ask AI
have you asked your AI what it thinks about you?
What a suck it is
Works great for NCAT Tribunal arguments and hearings, to analyse FOI legislation etc… and write huge submissions. Great stuff!!
it feels pretty huge for digging through legal docs - I'd be worried if I was a junior lawyer in this day and age
Assuming NCAT is the state equivalent of VCAT, ChatGPT was begging to create the full case.
I mainly use ChatGPT. It has been a wonderful assistant for troubleshooting Docker containers, generating automation scripts and Python code, teaching me how to use new programs, or suggesting other tools/programs that can suit my requirements, etc.
I had a pretty good read of this thread a few days ago. Didn't comment, but I've come back because of the discussion between @Protractor and @10percent. It occurred to me that the argument shares some parallels with Socrates (how is that for a pedestal?), who despised the use of books: being static and unable to engage in back-and-forth discussion, they could not effectively convey true wisdom or understanding. Here we are, somewhat full circle: we can go back and forth with AI, but of course it is only mimicking and is not wise or understanding of what it is saying. Although it does a very good job of pretending to be. Anyway, have books rotted my brain like AI supposedly will?
Anyway, I really didn't use it for the first few years. This year I've found uses for it that have really helped me out a lot, in several different ways. I've always been a generalist, I'll never be an expert in anything, but GPTs have really accelerated what I can learn and helped me get further with actually implementing and retaining what I learn. So far I'd say it's a net win personally. For the internet, artists, content creators, etc., it definitely has some pretty severe negative aspects.
All good, but philosophy comparisons aren't required.
By the time all those cons around AI get found out, the genie will be long gone. There's nothing smart about it: no goal posts and no oversight of something as fundamental as what goes into the entire knowledge base or morality of the most dangerous species in the cosmos, thus far. There is an ocean of difference between the impact of books, which evolved slowly and were constantly peer reviewed every time they are/were read, and a free for all of unleashed, unfettered, and un-umpired AI. An AI underwritten by vested gazillionaire tech & political (oligarchs and worse) interests who will fight to the death to keep the status quo of little or no reins on AI. The rapid uptake has hoovered up the corners of human behaviour that had remained safe from the data harvesters, until now.
But hey, thanks for the comparison. I'm sure Socrates is spinning in his grave over that comparison, the book impact (non) link, and the passive surrender of human intellect.
who will fight to the death to keep the status quo of little or no reins on AI
I think given time… and not very much time at all… those same vested interests will be fighting to the death to get Daddy Government to bring in some regulation that will initially hurt themselves, but more importantly will make it very difficult for new entrants to come to market. Oligarchs always did like artificial moats.
"impact of books, which evolved slowly and were constantly peer reviewed every time they are/were read, and a free for all of unleashed, unfettered" and then you take a quantum leap to AI.
There's that cultural blindspot, that absence of individual intellectual rigour required from book reading, that elephant in the room, social media, that has engineered the brains of a generation to accept whatever propaganda is fed to them.
By the time the evil spirit of social media has shaped your views on everything, it's already too late to properly comprehend AI.
I have a Perplexity Pro subscription that has access to a range of different models. I use it for my work (sysadmin) and it's an invaluable tool. I get clear, referenced, technically correct, current answers with no advertising or rabbit holes. I don't even use Google anymore unless I'm trying to buy something; it's just an advertising and shopping platform to me now.
I don't do chatbots or generative AI or any of that stuff humans should be doing, that's not where it's at for me.
it's just an advertising and shopping platform
Always was.
I remember the before times when Google had the motto "Don't be evil".
I mean, Enron had "Respect, Integrity, Communication, Excellence" and the advertising slogan "Ask Why". It's not exactly hard to find a corporation that acts in a way that is the opposite of their so-called motto, unless that motto happens to be "Our corporation are the definition of a psychopath".
@AaronRain: "Our corporation are the definition of a psychopath".
hahahaha
I think it's got to the point where many of its employees perceive it like this: an entity they have no control over that is trying to consume the world.
I remember when bing existed and stood for Because It's Not Google
I remember when bing existed and stood for Because It's Not Google
I Asked Jeeves if that was true and I didn't get any relevant results.
https://www.askjeeves.com/web?ad=dirN&qo=serpSearchTopBox&q=…
I also found that Bing still exists.
I like debating AI on political and social issues.
I always win.
Unlike real people, it collapses under any scrutiny and apologises. It doesn't help that it relies for information on the mainstream media, Wikipedia and every other progressive source.
Correct. Ask anything about the political narratives, climate, sex, race, COVID, and the answers are perfect. But introduce a conspiracy theory on any of them and suddenly it finds relevant responses, and you can force it to admit the unspeakable.
As I said to someone before, right now we are in the Renaissance period; later will come the Spanish Inquisition, falling off into the early-20th-century Great Depression.
I asked AI to fact-check this and it says it's a plausible argument: even the last 100 years have seen great advancements, but next will come the stage humanity isn't ready for.
But in this golden period I use it to assess dead GitHub projects and see if they're worth reviving. The majority are not, due to serious compilation issues and missing modules that are beyond the scope of what the AI can do.
But for getting foundational knowledge and using it as an idea generator, for sure.
I have made a few projects, some I even use now, but regardless you need the hardware to truly push AI to where you want it to go. For example, a lidar camera that scans geometry but whose software is garbage: AI can help with a solver algorithm and build a program to run on it, but then comes the complexity of accessing the features that the Chinese manufacturer reluctantly exposed, which made people rage.
But overall, the best use cases for AI are, strangely, paradoxes, fact-checking, and using it as a resume tool (sadly already flagged). As for the best AI, there isn't one; it's rather what you do with it. I'd say Claude is the worst AI for novices exploring, it's got the worst offerings, and as for which is best, it's anyone's guess that falls on deaf ears.