Peter is the former President of the New Zealand Open Source Society. He is currently working on Business Workflow Automation and is the core maintainer of Gravity Workflow, a GPL workflow engine.
Actually, laziness is what's bad, not AI. AI is just a tool. People aren't ready to use this kind of tool, but that's another discussion.
Saying that AI can destroy our future repeats what people have said in other contexts about Google, the calculator, and other tools.
Also, I don't believe AI will do the same job that programmers do today, as some CEOs are trying to sell. That belief comes from ideology, not science. I don't think AI will ever replace humanity: it can't be trained on its own output, so AI will always depend on humans to work.
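That intuition has been studied under the name "model collapse". Here's a toy sketch of the idea — a deliberately simplified stand-in (a Gaussian repeatedly refit to its own samples), not a real language model — showing how training only on your own output tends to lose diversity over generations:

```python
import random
import statistics

# Toy sketch of "model collapse" (an illustration, NOT how real LLMs
# are trained): the "model" is just a Gaussian, refit each generation
# to samples drawn from the previous generation's fit.
random.seed(0)

N_SAMPLES = 20      # small samples make the estimation noise visible
GENERATIONS = 500

# Generation 0 trains on "human data".
data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]

stdevs = []
for _ in range(GENERATIONS):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    stdevs.append(sigma)
    # Each later generation trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(N_SAMPLES)]

# The spread (diversity) of the data tends to shrink toward zero.
print(f"stdev at generation 0:   {stdevs[0]:.4f}")
print(f"stdev at generation {GENERATIONS - 1}: {stdevs[-1]:.6f}")
```

Each refit introduces a small estimation error, and because the next generation only ever sees the previous model's samples, those errors compound instead of being corrected by fresh human data.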
In my view, there is no such thing as a "war against AIs": we don't fight tools, we just use them.
I agree. I'd also like to point out that, as a developer, my primary goal is to get things done; if AI writes code for me, I'm fine with it as long as it makes me more efficient.
So true. So many people don't understand this.
I made a post kinda on this topic a while back:
Will AI Replace Us? 🤖 🫨
Best Codes ・ Feb 17
Also very true.
I do believe we are at war, because we are fighting for our true identity. Who are we when we rely solely on the output of algorithms?
We all have an identity!
Also, I'm not saying we should rely solely on AI to write code.
Who are we when we use AI to help us write code? Super-coders! 🚀
Who are we when we don't use the tools available to us? Inefficient developers. 😕
Who are we when we never write our own code? Lazy AI users. 🤪
As humans, we already rely on the algorithms in our brains...
Do you now? Do you use them, or are they being used against you? What do you think controls what can be said online? Who gets promoted and who doesn't? What decides whether you get a loan? What decides if you are banned from social media? What decides what you will see in your feeds?
The machines are already in control; they just ain't self-aware yet. Or rather, these ones aren't self-aware yet. Whether they replace us is up to humanity, but the question of whether they could is already answered.
A self-aware machine is an oxymoron. And as @clovisdanielss said earlier, AI is a tool. People will use it for good or bad.
Humans are machines. The neural nets inside AI systems exhibit the same kind of emergent behaviours as humans. Currently AI lacks a continuous individual experience. There are no individuals, just a single monolithic model used on many servers. However, this may not remain the case. We could very well end up with machines that have the same kind of internal reflective experiences we do. The whole point of the AGI companies is to make something more than a tool.
Regardless of whether or not a machine can 'think' or 'reflect', a machine can never be self-aware or have a conscience.
Sure, both brains and artificial neural networks are complex systems. But complexity doesn't equal sentience. A fancy toaster is intricate, but it doesn't contemplate the meaning of toast.
Also, humans are not machines. Humans have a soul and spirit side that can never be recreated with a machine.
While I agree with some of your points, we could also say that smoking is bad, not cigarettes. My point is that AI is a tool, but humans are becoming reliant on it, and that is affecting how we respond to each other.
Students now rely on AI for all tests and exams, which kills imagination. No, I am not saying we should do away with AI; I am only asking whether it can be regulated to some degree so that humanity retains its true identity.
In my opinion, you made a false assumption.
You said that my analogy is equivalent to the analogy "Smoking is bad for you, cigarettes are not". This is a fallacy.
Cigarettes have only one purpose: to be smoked.
AI, as a tool, has many objectives, both bad and good.
That's why you can't say my argument amounts to the same claim.
Furthermore, the fact that people misuse AI is not AI's fault (because AI has many good things to offer, unlike cigarettes).
The problem is the lack of user maturity. As you said, there are students who use AI to pass their tests. But the problem isn't with the tool, the problem is with the lack of morals that people have today (or ever since).
That's why I said "people aren't prepared to use this kind of tool, but that's another discussion".
And this explains why I don't believe we have a war on AI, but a war on morality.
People smoke cigarettes for many reasons, good or bad, and you are right that AI has many benefits; I can't disagree with that. Still, I am pointing out one risk: a single mistake can cause irreparable damage, and that is what I am highlighting.
Finally, we are indeed in a period of asking whose AI is better, and of what stops humanity from taking it to the next level and weaponizing it.
Well, I think that when we talk about weapons we're entering a new topic.
But man uses almost every kind of knowledge he can as a weapon. AI will be no exception. So I still think that the problem is not with AI, but with morality.
Furthermore, I think anything used to do harm causes irreparable damage. A simple mockery can cause serious psychological damage to a person. And even if we stop the development of AI, if people continue to be a "liquid society" (Zygmunt Bauman), we will have all the problems we are attributing to AI.
It was nice to chat with you. I will follow you, hoping to have another discussion on another subject.
Likewise, I learned quite a lot, and I look forward to more open discussions.
Yes, correct.