I'm not overly concerned about the Terminator-type scenario, but I don't think it's wrong to raise concerns about AI. Weaponised drones are already in military use; how long before someone decides to run them with AI? Then you can run into ED-209-type scenarios (see RoboCop, 1987), where badly engineered AI produces unintended but entirely foreseeable, and undesirable, results. Whilst that doesn't threaten the future of mankind, it could still get people killed.
It's naive and dangerous to underestimate just how badly wrong 'AI' can go, especially if you're in the business of working with it. In fact, if you work in the field of AI, it is your responsibility to be aware of and mitigate the known issues: for example, inherently biased datasets that lead to racist or misogynistic outcomes, of which there are already many documented examples.
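To make that last point concrete, here's a minimal sketch of the kind of sanity check I mean: comparing positive-label rates across a sensitive attribute before training on a dataset. The field names (`group`, `label`) and the 80% threshold (the common "four-fifths" rule of thumb) are illustrative assumptions, not a standard API, and a real audit would go much further than this.

```python
# Minimal sketch: flag a labelled dataset whose positive outcomes are
# skewed across a sensitive attribute. Illustrative only.
from collections import defaultdict

def positive_rate_by_group(rows, group_key="group", label_key="label"):
    """Return the fraction of positive labels per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparate_impact(rates, threshold=0.8):
    """Flag any group whose positive rate falls below `threshold` times
    the best-off group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical data: hiring labels skewed against group "B".
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]
rates = positive_rate_by_group(data)
print(rates)                         # {'A': 0.667, 'B': 0.333}
print(flag_disparate_impact(rates))  # {'A': False, 'B': True}
```

A check like this won't catch subtler proxy variables, but if a model trained on data that fails even this crude test is shipped anyway, that's negligence, not bad luck.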