Bringing up the Basic Things About AI

ElenaAl ・ 4 min read

Contrary to how it may be used in an everyday context, artificial intelligence is not a countable noun, but rather a scientific discipline or a set of problems, concepts, and methods for problem-solving.

While there’s no single definition universally accepted by professionals, the AI in practical use today is what the field calls narrow AI: a system designed to handle one task. The long-term goal of researchers is to develop general AI, or strong AI, which could handle any intellectual task.

The very concept of AI was cemented by Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” in which he not only posed the question of whether machines can think but also proposed that a machine could learn from experience as a child does. Though it has faced its challenges, AI development has accelerated since the 90s and is now applied across computer science (machine learning, computer vision, natural language processing, automation) and in fields such as business, big data, e-commerce, science, and social media.

The broad abilities of AI, coupled with the (often inaccurately skewed) perception of it generated by pop culture and entertainment media, have created many jokes and discussions regarding the future effects of AI on mankind, from memes on social platforms to legitimate debates and surveys on the potential risks of AI affecting the workforce. That'll be brought back up later.

The future of AI’s advancement can’t be disconnected from its present usage. AI is found in multiple avenues in today’s digital age and businesses. For example, the healthcare industry employs AI applications that provide personalized medicine, X-ray analysis, and reminders for healthy nutrition and exercise. In retail, AI enables virtual shopping with personalized recommendations for customers, and it is predicted to improve stock management, marketing, pricing and promotion, and customer service while reducing human error. For instance, IBM developed its Order Management System (OMS) to help companies increase channel sales and improve customer service by tracking order entry, inventory management, and after-sales services. Google and IBM use deep learning to train neural networks on complex patterns, which powers applications such as image and speech recognition.

The technique people may be most familiar with is the recommendation algorithm, which analyzes data to perform functions such as suggesting music or films/shows on Spotify or Netflix. Conversely, AI within automation has been, and continues to be, cited as displacing workers, particularly cashiers. This has been witnessed with self-checkout and with machine intelligence used for robots on assembly lines in manufacturing industries, especially for automobiles.
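To make the idea concrete, here is a minimal sketch of a content-based recommender of the kind described above: it scores catalog items by cosine similarity to a user's taste profile. The titles, genre features, and taste vector are all invented for illustration; real services like Spotify or Netflix use far richer signals and models.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy catalog: each item described by genre weights (drama, comedy, sci-fi).
catalog = {
    "Space Saga":    (0.1, 0.0, 0.9),
    "Office Laughs": (0.0, 0.9, 0.1),
    "Star Drama":    (0.6, 0.0, 0.4),
}

def recommend(taste_vector, catalog, top_n=2):
    # Rank every item by how closely it matches the user's taste profile.
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(taste_vector, kv[1]),
                    reverse=True)
    return [title for title, _ in scored[:top_n]]

# A user whose watch history leans heavily toward sci-fi.
print(recommend((0.2, 0.0, 0.8), catalog))  # → ['Space Saga', 'Star Drama']
```

In practice the taste vector would itself be derived from viewing or listening history, but the core loop (featurize, score, rank) is the same.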

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at NYU, has stated that, just as previous generations witnessed the effects of the Industrial Revolution, technology’s rapid, exponential growth will “mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

However, this notion has been criticized by professionals such as Alexander Linden, research vice president at Gartner, who argues that AI myths stem from the misconception that its intelligence is equivalent to a human’s. AI is being used to supplement human intelligence rather than replace it, and challenges still surface, such as inaccuracy in solving math problems and bias in facial recognition.

Because AI has no alternative but to learn from data, any inaccuracies in that data will be reflected in the results. An AI model must be retrained continuously and receive feedback from end-users to improve its accuracy and performance. The advancement of AI is projected to make massive impacts on many aspects of work for employers and the workforce. In human resources, it’s aiding recruiting professionals in matching qualified applicants with job opportunities by using data to identify which candidates fit an open role’s requirements. These technologies screen resumes to narrow the candidate pool and counteract human bias, and they are often built into affordable applicant tracking systems.
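The retrain-from-feedback loop mentioned above can be sketched in a few lines. This toy example uses a perceptron-style classifier whose weights are nudged each time end-user feedback disagrees with a prediction; the feature vectors, labels, and learning rate are all invented for illustration, not any production system.

```python
def predict(weights, features):
    # Linear score: positive (>= 0) means class 1, negative means class 0.
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else 0

def update(weights, features, label, lr=0.1):
    # Perceptron rule: shift weights toward the correct answer only when
    # the user's feedback disagrees with the current prediction.
    error = label - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

# Start with an untrained model, then fold in end-user corrections.
weights = [0.0, 0.0]
feedback = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 0.2], 1)]

for features, true_label in feedback:
    weights = update(weights, features, true_label)

print(predict(weights, [1.0, 0.1]))  # → 1 after incorporating feedback
```

Real systems retrain on batches of logged feedback rather than one example at a time, but the principle is the same: the data users generate is what corrects the model.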

In finance, higher-level advisory and sales roles that require more human connection will become more available as AI methods take on lower-level activities, and software engineering and data scientist roles will become more abundant, as machine learning already executes trades and financial bets. Machine intelligence will further develop safer workplaces, in which robots performing the more dangerous tasks allow humans to move into safer careers. This is already seen in how robots are used at natural disaster relief sites to help locate casualties. AI techniques will also be applied to workplace learning programs in which the curriculum is customized to the user’s knowledge and understanding of the topic, sped up or slowed down, or made more or less in-depth depending on their performance.

This allows onboarding processes to be more effective, getting new hires ready to work in a shorter time frame, which can boost a team’s productivity. One area of profession that’s more debated than others is creativity, particularly the arts. As stated previously, machine intelligence isn’t quite on par with human intelligence, and creativity is an example: it’s intrinsic to the individual and thus may be harder for AI networks to replicate. Still, progress is being made, as seen in AIVA, an AI music composer that creates personalized music, and the AI software named “Benjamin,” which was fed screenplay samples and gradually acquired the ability to imitate a screenplay’s structure (though it was unable to handle proper names, as they are less predictable).
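The style-imitation idea behind software like “Benjamin” can be illustrated with a toy Markov-chain text generator: it learns which word tends to follow which from sample text, then produces new text with a similar structure. The tiny corpus and seed word below are invented for illustration; the actual Benjamin system used a recurrent neural network, which is far more capable than this sketch.

```python
import random
from collections import defaultdict

# A tiny invented screenplay-style corpus (real systems train on megabytes).
corpus = "INT. SHIP - NIGHT He looks at the screen . He turns away ."
words = corpus.split()

# Build a first-order transition table: word -> list of observed next words.
table = defaultdict(list)
for cur, nxt in zip(words, words[1:]):
    table[cur].append(nxt)

def imitate(seed, length=6, rng=random.Random(0)):
    # Walk the chain: repeatedly pick a plausible next word at random.
    out = [seed]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(imitate("He"))
```

The output reads like the corpus because every two-word sequence it emits was seen in training, which is also why such models stumble on rare tokens like proper names: with too few observations, there is nothing reliable to imitate.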

While these examples are major breakthroughs for AI and for how creative work can be done, there are still shortcomings that reveal visible differences between a human’s creativity and a machine’s. The significance of AI methods cannot be overstated, especially given how much they’ve influenced technological development, how we think of technology, and the underlying similarities and differences between humans and machines. With the Internet of Things expanding, AI will only be applied further to virtually all walks of life in most, if not all, developed countries, and possibly the rest of the world.
