Just like the entire ads market, it's all forgery to drive up clicks so owners can tell clients there is interaction.
Don't get me started on the recent YouTube ads on iPad that place a banner on top of the video, hiding the subtitles; closing it is buried behind a menu that requires a brain surgeon to operate, while clicking the ad itself is effortless. I currently have 15 tabs in Safari from ads that I inadvertently clicked.
Same energy as the carousel era. The client doesn't actually want a chatbot; they want to not feel behind. The question nobody asks is 'what would this chatbot actually do that a good FAQ page can't?', and usually the honest answer is nothing. But it looks modern, and that's enough to get through the meeting.
I just went through some of the posts and you are right. It's very suspicious, but I would say it's right at the edge of being plausibly written by a human. If it's LLM, then it's the first one I'm aware of that got me this good. I am usually the first one to point out that something reeks of LLM writing here (which I'm kinda ashamed of, considering how much I've been doing this).
Tbh the whole smolweb concept by this person seemed kinda weird right when I discovered it was a thing. It doesn't seem to really be a thing, but the person is trying hard to convince you that it is.
I mostly agree but some recent experiences with voice chat bots give me pause:
FedEx now has a voice bot when you call, and it is kind of good and fast; faster than navigating their website, anyway. It picks up right after some boilerplate. It can understand me.
With website chatbots we could see similar leaps if they are done well and have access to CRM/ERP systems etc. so they can actually help you.
I've built chatbot demos for big corps like Walmart and other non-tech brands. What they want is "something that looks AI." The problem with chatbots is they don't work.
I love the site, but it's also worth noting that because it is not mobile-friendly it can afford to take full advantage of its efficient catalog nature and not feel the need to make compromises. Sometimes I wish we had said "browsers are for desktops, apps are for tablets/phones" and never tried to combine the two.
I stress over this with my own website-for-work. If I make the developer's version of my site, who am I talking to? Other devs. If I make the version that appeals to agencies and casual users, there's a constant voice in my head trying to drag me back to something simpler, lighter, judging me for that threejs hero section. As with all things, I guess it's a matter of finding the right balance. Web development sure is in a very strange place and transitioning hard right now - off topic, but I'm seeing more and more people looking for work and fewer and fewer job postings, especially for freelancers like myself. But maybe I'm not advertising AI bot integrations hard enough.
I think an important subtlety here is that clients/"normies" look at different websites than we do, so the taste in websites that they cultivate is different from ours.
> Then the trend quietly died, as trends do. Not because anyone decided carousels were bad. Just because something newer came along to copy.
> [...]
> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.
> [...]
> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.
> [...]
> No pop-ups. No blinking corners. Just content, clear and immediate.
It's been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there's this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.
And, I mean, none of these is even bad in isolation, but it sure feels like we're due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.
Have courage and trust your own instincts. Unless one is extremely disagreeable it's very tempting to hedge and avoid outright saying "this is AI" just in case you're wrong, but if you're literate and regularly exposed to AI outputs your instincts are likely quite accurate.
In this particular case the linked article is definitely AI generated.
I started off hedging, but by the end of the comment came to think that AI use, or lack thereof, was actually beside the point. I have feelings with regard to the situation, where "the situation" includes some largely irrelevant-to-writing things like the mainframization, and the "feelings" are not nearly coherent enough to graduate to thoughts. Thus (unlike some others) I don't think that calling out writers or warning readers about AI is all that useful (or, for that matter, courageous). With respect to writers who use AI due to a lack of confidence, it's probably even harmful. (Saying that as a person who manages to absolutely suck in embarrassing ways in multiple foreign languages. And also in English, but less obviously. And likely in my native language too, due to lack of use.) Meanwhile, TFA makes a decent point, and I am in no position to criticize people for being wordy.
The thing is, by now it doesn't actually matter if it's AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things, like em dashes), but it feels like there's a new language/social-signalling thing now, and you may have to avoid it even if you're not an LLM.
Indeed, consider these two posts linked below, also from this blog. They look the same and maintain the same impersonal writing style. There's no humanity to it at all.
They maintain such a consistent paragraph length that they're either a professional copyeditor or, as is clearly the case, an LLM.
Humans deviate a lot more than this; they use run-on sentences or lose the thread in their writing.
This blog, however, reads like every other post on LinkedIn: semi-professional tone, with a strong "You, Me" hook to most posts.
I encourage everyone to make an LLM-generated blog (don't post the articles anywhere, just generate one) to get a feeling for how these things write.
Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.
Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.
https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...
https://adele.pages.casa/md/blog/finding_flow_in_code.md
Is this comment LLM generated?
What does that have to do with anything? These days any piece of text may or may not be AI generated (my money would be heavily on "no" for the post you asked about), but either way it isn't blatant slop, so we can't tell.
It feels like you're trying for a lazy gotcha, but the actual point here is something like "AI models often generate writing with specific noticeable characteristics that make it obviously AI output, and TFA is an instance of such writing, and this should be called out when possible".
> "…there's this record-scratch feeling…"
The OP is a blog post. You're talking about blog-post writing. Maybe you just don't like their style?
It's also true that LLM second drafts are a thing.
And it's true both can "record scratch" you right out of attention.
As is the now-present trend of readers being impatient and quickly bored.
And this criticism of writing style (for my part, this article is perfectly readable): what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don't like the soup.
LLMs don't "own" this writing style. By definition they can't: they were trained on human writing, after all! People wrote like this before, and that's fine. You might not like the style, but saying it's because LLM writing has infested their brain is wrong, dismissive, and dehumanising.
Any style can cross the border into bad and get in the way of itself when it's turned up to 11, no matter who wrote it.
There've been stylistic fads before LLMs were a thing, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.
Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.
Yes, definitely, but the parent post was quite explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content.
Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.
Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.
I think the original comment is much more open-minded towards the author of the TFA than you are to the commenter.
> explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content
We might disagree here, but if we're strict they did not say "either/or", especially not explicitly. They raised two possibilities, but didn't exclude others.
> there's no reason to believe the style came from LLMs
They say "might" and "plausibly". I think there's no belief there until you assume it.
And even if: It's not unlikely that a contemporary author's mind is influenced by the prevalent LLM style. We are influenced by what we read. This has been happening to everyone for ages, without anyone questioning the agency of writers. There's nothing wrong with suggesting that could be the case here. It's entirely human.
I know it's easy for one's mind to jump to conclusions, but I am not a fan of taking that as far as accusing someone of "dehumanizing" others. Such an escalation should ideally prompt a pause and a think before pressing submit.
Nah, the two possibilities were in fact exclusive in my mind (subject of course to the usual likelihood of any one thing I say being completely wrong, but that's always in the background and not that useful to constantly point out). And it might be fair to say that it is unwise to attempt this kind of amateur psychoanalysis in public. It's just that I don't see being influenced by things you read as a big deal, let alone an accusation, let alone a dehumanizing one. See my neighbouring comment[1] for more on the last point.
[1] https://news.ycombinator.com/item?id=48073567
Only to a limited extent; the fine-tuning of these models uses a much smaller, more curated set to establish tone and defaults.
The whole corpus is in there, but the standard style is what gets tuned for.
I wonder how much marketing copy has poisoned the "default" writing style of LLMs, it surely has those undertones of pitching a sale in an uncanny valley way.
So I will say that the things I read were not written in this style.
And the people I read were better at not putting in unnecessary, random, completely made-up facts or illogical implications.
LLMs don't own these expressions in the same sense that McDonald's doesn't own salt: they are undoubtedly making use of a strong reaction that humans have had, and have been having, long before; but they did develop a way to mash that button on an industrial scale like few before them. (With, of course, a great deal of help from humans, be it via customer surveys or RLHF; or you could call it help from Moloch[1], in that the humans unwittingly or negligently assembled themselves into a runaway optimizer.) So I think it's fair to say that LLMs do own this style, as in the balance of ingredients, even if they do not own the ingredients themselves. And anyway, nothing in the social perception of language cares about fairness: low-class English speakers did not invent negative agreement ("double negatives"), yet it will still sound low-class to you and even to me (and my native language requires negative agreement).
As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I've felt enough loose wires dangling out of my brain's own language-production apparatus that I don't think pointing out the mechanistic aspects reduces anyone's humanity.
For instance, nobody can edit their own writing until they forget what's in it; that's why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos and transposed or merged words. Ever spotted a bug in a code-review tool that you've read and overlooked a dozen times in your editor? Why does a change in font or UI make a presumably rational human being capable of drawing logical inferences they were not capable of before? In either case, there seems to be a conclusion cache of sorts that we can't flush and can't disable, requiring these sorts of actually quite expensive hacks. I don't think this makes us any less human, and it pays to be aware of your own imperfections. (Don't merge your copy and line editors into a single position, please?)
As for syntactic patterns, I've quite often thought of a slick way to phrase things and then realized that I'd used it three times in as many sentences. On some occasions I've needed to literally grep every linking word in my writing to make sure I haven't used a single specific one five times in a row. If you pay attention during meetings or presentations, you'll notice that speakers (including me!) will very often reuse the question's phrasing word for word, regardless of how well it fits, without being aware of it in the slightest. (I'm now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I've heard), so the brain will absolutely take every possible shortcut, whether we want it to or not.
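That "grep every linking word" step can be mechanized; here's a rough sketch (the word list and function name are illustrative, not anything the commenter actually uses):

```python
from collections import Counter
import re

# Illustrative set of linking words; extend to taste.
LINKING = {"thus", "however", "moreover", "indeed", "meanwhile", "therefore"}

def linking_word_counts(text: str) -> Counter:
    """Count how often each linking word appears, case-insensitively."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in LINKING)

sample = ("Thus we begin. Thus it continues. However, thus it ends. "
          "Meanwhile, nothing changes.")
print(linking_word_counts(sample))
# → Counter({'thus': 3, 'however': 1, 'meanwhile': 1})
```

Anything that dominates the counter is a candidate for variation on the next editing pass.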
Thus I expect that the priming effect I'm alleging can be very real, even before getting into equally real intangibles like "taste". I don't think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of the mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.
(Not that I claim to be a particularly good writer.)
[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
None of that feels like AI smell to me, despite the "it's not X, it's Y" framing. I can't really explain why, though.
None of those 4 look like AI slop to me. They lack the strange non-sequitur nature these contrasting statements generally have when made by AI. The version of the third example I would expect from a clanker would be more like:
> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine
Which of course doesn't connect to the rest of the article contents, because the AI doesn't have any intention in its writing.
My partner works at a nonprofit, and they paid some consultant for a chatbot. The next month they were surprised to get a $2000 bill for the API use, and at first wondered if the bot was really popular. The analytics revealed that very few conversations were happening.
The consultants apparently had the bot load and fed it an immediate prompt which greeted the user, and this was happening on every page load. Bad consultants, bad bot.
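The fix is plain lazy initialization: don't make the paid API call until a visitor actually opens the widget, and cache anything static like a canned greeting. A minimal sketch of the difference (all names hypothetical; the consultants' real stack is unknown):

```python
# Hypothetical reconstruction of the billing bug, not the consultants' code.
calls_made = 0  # stand-in for the API bill

def call_llm(prompt: str) -> str:
    """Stand-in for a paid LLM API call."""
    global calls_made
    calls_made += 1
    return "Hello! How can I help?"

class EagerBot:
    """What they shipped: greets on construction, one paid call per page load."""
    def __init__(self):
        self.greeting = call_llm("greet the visitor")

class LazyBot:
    """Defers the call until the visitor opens the widget, and caches it,
    since a canned greeting never changes."""
    _cached_greeting = None

    def greeting(self) -> str:
        if LazyBot._cached_greeting is None:
            LazyBot._cached_greeting = call_llm("greet the visitor")
        return LazyBot._cached_greeting

# 1000 page loads, of which only 3 visitors actually open the chat:
for _ in range(1000):
    EagerBot()
eager_calls = calls_made

calls_made = 0
for _ in range(1000):
    LazyBot()                 # page load: free
for _ in range(3):
    LazyBot().greeting()      # widget opened: first call is cached
lazy_calls = calls_made

print(eager_calls, lazy_calls)  # → 1000 1
```

Same visible behavior for the user, three orders of magnitude fewer billed calls.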
The number of consultants who are well known, have a large presence in developer communities, give a lot of talks, and still have no idea how to approach real-world problems is impressive.
"Bad consultants", you mean? That's the average consultant.
"It's about visibility, the fear of looking behind"
This sums up everything driving the tech sector right now. From execs at big tech to nobodies on X.
EDIT: thinking about the nature of it, the visibility fight comes from decreasing attention amid increasing channels and noise, so visibility tactics go to the extreme. And the fear of looking behind comes from previous tech cycles and the thought: what if you had missed those? Maybe those with the most fear are the ones that did.
> right now
It's always been like this. I used to build websites in the '90s, and it was exactly like that. It was also horrible: people who had no tech background whatsoever making decisions on which tech to use (PHP vs ASP vs ColdFusion, remember those?); overpaying agencies to make HTML "templates" that had to have round corners everywhere. Etc.
Not everything's great today, but it's a little less bad I think.
I don't know. I think back to my first dial-up connection and getting internet for the first time. In no way do I remember fear being a driver; I remember people being curious. Nobody ran around saying you need to get on the internet or you will be left in the dust. I'd be curious to see examples if I'm wrong: YouTube links to old news broadcasts, a magazine print ad archive, or something.
Well, the marketing from the AI companies is working.
That's the clever nature of these companies. They are playing on people's fear to drive adoption. It's a bit sickening to me.
"Adopt or be left behind" and the quality of the thing you're adopting relies heavily on how much training it receives by the users who are scared of being left behind.
It's FOMO, and it works every couple of years because the execs who buy in are different to the last lot of execs who got promoted/canned.
The obvious solution is to implement a mock chatbot that answers from a set of pregenerated wrong answers. No one will know the difference.
Genius.
I had the same experience with chatbots, but we shipped a chatbot module a year ago that helps with complex config questions by reading and answering based on a Salesforce Experience site.
I was skeptical but it gets a 68 NPS from users, even if we do get the occasional "why are you investing in AI I hate it" coming through the feedback channel.
As ever, the issue is "what problem are you solving". If it's that you want more people to put their hand up and talk to you/order something, chatbots seem like a bad solution. If it's that you have a ton of complex docs that people have to read in order to implement and use your product, it's not the solution but it's probably part of a solution.
If your docs are public and well indexed, you don't need the chatbot, since users can use e.g. Google AI.
These chatbots and Google login prompts are my most hated features of the current web.
Obviously it's just a script embedded in the page, so it has no actual place in the design. The effect, especially on mobile, is this dance of starting to read a page, having it obscured by annoying pop-ups, and trying (and failing) to close the pop-up with the hidden 12x12-pixel x button.
Just like the entire ads market, it's all forgery to drive up clicks so owners can say to the clients that there is interaction.
Don't get me started on the recent YouTube ads on iPad that place a banner on top of the video, hiding the subtitles, where closing it sits behind a menu that requires brain-surgeon precision to interact with, instead of just clicking the ad itself. I currently have 15 tabs in Safari from ads that I inadvertently clicked.
Same energy as the carousel era. The client doesn't actually want a chatbot; they want to not feel behind. The question nobody asks is 'what would this chatbot actually do that a good FAQ page can't?' and usually the honest answer is nothing, but it looks modern and that's enough to get through the meeting.
> No pop-ups. No blinking corners. Just content
Your clients seem to have got what they wanted, or at least someone who has learned to write like one.
Come on, this is clearly human-written. People have been writing like this for a very long time.
It isn't "clearly human-written" at all, the entire blog looks like LLM output, right from the very first post.
I'm not witch-hunting, there are just a lot of witches.
I just went through some of the posts and you are right. It's very suspicious, but I would say it's right at the edge of being plausibly written by a human. If it's LLM, then it's the first one I'm aware of that got me this good. I am usually the first one to point out that something reeks of LLM writing here (which I'm kinda ashamed of, considering how much I've been doing this).
Tbh the whole smolweb concept by this person seemed kinda weird right from when I discovered it. It seems to not really be a thing, but the person is really trying to convince you that it is.
I mostly agree but some recent experiences with voice chat bots give me pause:
FedEx now has a voice bot when you call, and it is kind of good and fast. I mean faster than navigating their website: it picks up directly after some boilerplate, and it can understand me.
Website chatbots could make similar leaps if they are done well and have access to CRM/ERP systems etc. so they can actually help you.
I've built chatbot demos for big corps like Walmart and other non-tech brands. What they want is "something that looks AI." The problem with chatbots is they don't work.
>> A way of saying: we're keeping up.
Back in the day, websites could just put up an animated "under construction" gif.
Show your clients McMaster-Carr. It's not "simple". It is efficient.
I love the site, but it's also worth noting that because it is not mobile-friendly it can afford to take full advantage of its efficient catalog nature and not feel the need to make compromises. Sometimes I wish we had said "browsers are for desktops, apps are for tablets/phones" and never tried to combine the two.
I stress over this with my own website-for-work. If I make the developer's version of my site, who am I talking to? Other devs. If I make the version that appeals to agencies and casual users, there's a constant voice in my head trying to drag me back to something simpler, lighter, judging me for that threejs hero section. As with all things, I guess it's a matter of finding the right balance. Web development sure is in a very strange place and transitioning hard right now - off topic, but I'm seeing more and more people looking for work and fewer and fewer job postings, especially for freelancers like myself. But maybe I'm not advertising AI bot integrations hard enough.
Are casual users crying out for AI chatbots? From my experience the only stakeholder pushing for those is the business itself.
By casual users, I mean non-technical people who might reasonably be on my website because they're looking to commission work.
Girl, give them ELIZA, they won't even notice.
I think an important subtlety here is that clients/"normies" look at different websites to us, so the taste in websites that they cultivate is different to ours.
Bring back lightbox!