For over a month now, I have been wondering what the value of a post is when AI has become this good, when web-scraping AI programs copy content and churn it out in large numbers, when more people are becoming lazier, and when those of us who put in hard work are getting less and less value from it.
Companies with deep pockets are spending money on ads using AI-generated content, displacing well-researched content.
What does the future hold for developers, writers, content creators, and product managers?
If we look behind the scenes and take a deep, introspective look at ourselves, we will find one undeniable truth: we are currently at war.
The war against AI.
I believe in the future of AI, but such power can destroy us all if we do not take care. If you feel I am wrong, please share your thoughts.
I am listening.
Top comments (28)
Actually, laziness is bad, not AI. AI is just a tool. People are not ready to use this kind of tool, but that is another discussion.
Saying that AI can destroy our future is something people have said in other contexts too, about Google, the calculator, and other tools.
Also, I do not believe that AI will do the same job that programmers do today, as some CEOs are trying to sell. That belief comes from ideology, not science. I do not think that AI will ever replace humanity. It cannot be trained on its own output, so AI will always depend on humans to work.
In my view, there is no such thing as a "war against AI"; we don't fight tools, we just use them.
While I agree with some of your points, by that logic we could also say smoking is bad, not cigarettes. My point is that AI is a tool, but humans are becoming reliant on it, and that reliance is affecting how we respond to each other.
Students now rely on AI for all tests and exams, which kills imagination, even if the tool itself is not the problem. No, I am not saying we should do away with AI; I only ask whether it can be regulated to some degree so that humanity retains its true identity.
In my opinion, you made a false assumption.
You said that my analogy is equivalent to the analogy "Smoking is bad for you, cigarettes are not". This is a fallacy.
Cigarettes have only one purpose, which is to be smoked.
AI, as a tool, has many objectives, both bad and good.
That's why you can't say that my argument is equivalent to that one.
Furthermore, the fact that people misuse AI is not AI's fault (because AI has many good things to offer, unlike cigarettes).
The problem is the lack of user maturity. As you said, there are students who use AI to pass their tests. But the problem isn't with the tool; the problem is the lack of morals that people have today (and have always had).
That's why I said "people aren't prepared to use this kind of tool, but that's another discussion".
And this explains why I don't believe we have a war on AI, but a war on morality.
People smoke cigarettes for a lot of reasons, good or bad, and you are right that AI has a lot of benefits, which I cannot disagree with. Still, I am highlighting one point: even though AI offers much that is useful, one mistake can cause irreparable damage, and that is what I am outlining.
Finally, we are indeed in a period of competing over whose AI is better, and what stops humanity from taking it to the next level and weaponizing it?
Well, I think that when we talk about weapons we're entering a new topic.
But man uses almost every kind of knowledge he can as a weapon. AI will be no exception. So I still think that the problem is not with AI, but with morality.
Furthermore, I think that anything that is used to do harm has irreparable damage. A simple mockery can cause serious psychological damage to a person. And even if we stop the development of AI, if people continue to be a "liquid society" (Zygmunt Bauman), we will have all the problems we are attributing to AI.
It was nice to chat with you. I will follow you, hoping to have another discussion on another subject.
Likewise, I learned quite a lot, and I look forward to more open discussions.
I agree. I'd also like to point out that as a developer my primary goal is to get things done; if AI writes code for me then I'm fine with it as long as it makes me more efficient.
So true. So many people don't understand this.
I made a post kinda on this topic a while back:
Will AI Replace Us? 🤖 🫨 (Best Codes, Feb 17)
Also very true.
Still, I believe we are at war because we are fighting for our true identity: who are we when we rely solely on the output of algorithms?
We all have an identity!
Also, not saying we should rely solely on AI to write code.
Who are we when we use AI to help us write code? Super-coders! 🚀
Who are we when we don't use the tools available to us? Inefficient developers. 😕
Who are we when we don't ever write our own code? Lazy AI users. 🤪
As humans, we already rely on the algorithms in our brains...
Do you now? Do you use them, or are they being used against you? What do you think controls what can be said online? Who gets promoted and who doesn't? What decides whether you get a loan? What decides if you are banned from social media? What decides what you will see in your feeds?
The machines are already in control, they just ain't self aware yet. Or, these ones are not self aware yet. Whether they replace us is up to humanity, but the question about whether they could is already answered.
A self-aware machine is an oxymoron. And as @clovisdanielss said earlier, AI is a tool. People will use it for good or bad.
Humans are machines. The neural nets inside AI systems exhibit the same kind of emergent behaviours as humans. Currently AI lacks a continuous individual experience. There are no individuals, just a single monolithic model used on many servers. However, this may not remain the case. We could very well end up with machines that have the same kind of internal reflective experiences we do. The whole point of the AGI companies is to make something more than a tool.
Regardless of whether or not a machine can 'think' or 'reflect', a machine can never be self-aware or have a conscience.
Sure, both brains and artificial neural networks are complex systems. But complexity doesn't equal sentience. A fancy toaster is intricate, but it doesn't contemplate the meaning of toast.
Also, humans are not machines. Humans have a soul and spirit side that can never be recreated with a machine.
Yes, correct.
Human use of AI may be bad.
Unless AI develops feelings, but that...
Wouldn't that make it human?
That would be really unimaginable, but my next post takes that into account.
I call it the AI paradox! Please do ping me about it, or I hope you'll read it when it's done.
AI is a tool and it will forever remain a tool. Its disadvantages outweigh its merits. AI is deliberately developed to kill creativity and human intelligence.
AI serves as an extension of human thinking; it's not here to replace our minds.
"AI serves as an extension of human thinking." What if AI became man's thinking due to its massive advancement?
That has been my thought recently too, but we are just watching.
Sorry, but I feel pity for your story about AI. Here are some analogies to aid your understanding.
A long time back, when computers were invented and people started using them, a majority of government employees were opposed, and they weren't happy because they thought the computers were smart and would take away their jobs.
Calculators are smart. However, did they kill the accountant or mathematics? Certainly not, and they never will.
It's just the human mindset that needs to be fine-tuned to better understand the purpose of AI. There's good and bad with AI; however, it's up to you to choose and decide.
I agree
"A long time back, when computers were invented, and people started using them, a majority of government employees were opposed to them, and they weren't happy because they thought computers were smart and took away their jobs."
My point is that AI is doing a good thing, but what stops big corporations from weaponizing it and exploiting the weaknesses it creates in mankind?
Hello, interesting subject for sure! Since it can be very broad, let's narrow it down like this:
I'm also a technical writer, but of a radically different type, and with 18 years' experience in tech.
I do not think that AI is a threat to several kinds of writers (including my kind). I think it is instead an opportunity, and that opportunity is for all of us. And it is not a matter of laziness. Let me explain myself; I think I'll end up writing a post about it.
Writing only for ourselves does not make much sense; we exist only in the eyes of readers.
Readers come to us for different reasons, depending on our focus. Inspired by Aristotle's modes of persuasion, we can divide them into three interests:
As writers we can give one or more of these at the same time.
AI is increasingly capable of synthesizing what has already been said, in a few seconds and with perfect words.
But ChatGPT lacks something important: readers want novelty in their field of interest. (Of course, they sometimes want a synthesis/aggregation of existing literature, but not as much.) And ChatGPT is based on what already exists.
So these kinds of articles will decline:
Here are the articles that are here to stay:
I write articles of the last type.
About laziness... in fact, it has driven most automation in human history; this is core to human nature. If you are passionate but repeat the same process every day, you are a smart machine.
So I think this is more an opportunity to shift to one of the other article types and bring more value to readers!
And AI helps me phrase everything better, so it does not replace me, it makes a better version of my articles 🤓
It could be bad; it just depends. For instance, AI can never really replace human ingenuity, as it has to learn from old patterns in human history. I think many believe it can replace human labor; it cannot. But what it will do is make a person more productive. Let's say you wanted to create a newsletter: with generative AI, you could create the newsletter from a few bits of content and then tweak it. Normally, you'd hire a contractor company or an individual contractor, but ultimately your workload is going to increase because one person can do more things. That's the bad part.
So, it just depends, but where harm will actually happen is in human society. Creating deep fakes, propaganda, and various other things to sway society is where AI can be misused. Already, we can train AI to take a familiar public figure and create images and video of them doing things they would not normally do. There are a lot of ethical questions. As well, artistic style can be copied, so there definitely need to be changes in the law.
It's going to be rough.
I'm going to link to two articles. The first is one I first published in 2005 which outlined what would happen. Primarily, this means that corporations would be the primary beneficiaries, and they will use AI to exploit us. They already are.
dev.to/cheetah100/rise-of-the-immo...
The second article I released a few days ago, outlining a possible solution for a positive future beyond AGI.
feynmanfan.substack.com/p/a-positi...
AI was inevitable. It's coming for the artists, and the programmers, and the bus drivers, and the lawyers, and the police officers. With the development of robotics, soon there won't be any role they are not capable of filling. I understand the fear.
But this isn't a war any more than when ants invade your pantry. The war is between corporations, with AI being the battleground. Corporations are not yet making money on AI, they are using it strategically. It is a last company standing fight to the death. Humans will be incidental.
Or not - perhaps there is a slim chance humans will find a solution in time.
AI is just reproducing what software engineers have been working on over the decades, so it seems able to carry out a lot that individual software engineers couldn't do.