By the mid-2030s, 38% of jobs in the US will have the potential for automation, with positions in manufacturing, retail trade, and construction leading the way. But while 37% of people are worried about their own job being at risk of automation, 73% also say that AI can never replace the human mind – and maybe that's a good thing.
In spite of those 2030s predictions, one estimate suggests that AI will create 2.3 million jobs in 2020 alone. Humanity heavily influences these machines: we need programmers to teach AI systems how to better read human emotion, and ethics controllers to ensure those same systems act in accordance with human values. But human beings are fallible creatures; we make mistakes, and whether we know it or not, we hold unfair prejudices. It's true that artificial intelligence has yet to master our soft skills, but machine learning takes other lessons from us instead.
On paper, AI looks like the perfect tool to reduce and even eliminate human bias, but when we feed an algorithm biased data, what comes out is just more bias. Racism in = racism out. One example comes from police departments using imperfect risk assessment technology. These risk assessment algorithms use AI to predict the likelihood of a defendant committing future crimes, allowing police and courts to act on those results. A ProPublica study revealed a darker side to this tech, showing that one formula was nearly twice as likely to falsely flag a black defendant as a future criminal than a white one.
In addition, predictive crime software shows police the areas where crimes are most likely to be committed, but these programs often dig themselves into a feedback loop that leads to over-policing of majority-black and low-income neighborhoods.
So there are a few kinks to work out when it comes to AI; a big one is tackling our own prejudices and disordered thinking. But what happens when AI is used purposely for evil, or as a means to control individuals and populations? Smart criminals see a valuable tool in AI, using it not only to identify vulnerable targets but also to gather data on them for customized phishing scams. Personalized targeting seems to be the name of the game on the dark side of AI, and email scams are only a small piece of the picture.
Today in China there are an estimated 200 million surveillance cameras – four times as many as in the US. Outfitted with AI capabilities, these cameras are used by Chinese police to scan the faces of citizens in order to catch lawbreakers, from jaywalkers to drug smugglers. Smart automated camera systems like this help governments collect data on their citizens, with or without their knowledge. In the hands of authoritarian rulers, this could mean hyper-targeted propaganda machines and disinformation campaigns.
When we fear AI, perhaps it's more a fear of what humans are capable of than of machines. The future of AI and automation is in our hands; let's be sure to shape it into a force for good. See this infographic for more detail on AI and automation, and how it's changing the way we view our jobs, world affairs, and even each other.