Ethics and AI Model Training

I'm focused on how ethics affects the efficacy and quality of AI problem solving, and on the potential for societal disruption wherever poorly trained AI may end up in use.

As the SaaS space (including .gov) is likely to be affected, I believe it's critical to actively monitor when, where, why, and how AI is in use (e.g., assisting with a student loan application, acting as a financial advisor in a banking app, serving HHS beneficiaries, etc.).

Who Makes the Rules?

Should a consumer be aware when, for example, the agency receiving the first application you submit after being awarded your doctorate (in crotch scratching, or whatever it was) is screening it with AI? Do you care that their "HR" department consists of ChatGPT reading the resume that essentially took you 30 years to build?

I speak from experience when I advise you that the AI "tasking" / "LLM Training Expert" agencies are concerned with quantity, not quality. What does that mean? The AI is trained on shoddy data. It's a crime. I mean, it genuinely is a crime.

Taking action when we see unethical practices in these early stages of AI is, I believe, critically important. Yet no one at the FTC, the FCC, etc. knows how to direct a caller who wishes to report unethical practices in AI training. How do you feel about that? Do the crime now! No one is watching. No one even knows which federal agency handles the circumstance, or where to direct a call. Go nuts!
Right?

If you (or an associate) have a greater-than-average interest in the future of AI, please consider telling me about yourself.

I have a link at Neutility dot Life where you can use a Google Form to share your interest!

Best regards!
J Sabarese
@ajaxstardust@vivaldi.social
