Good Versus Evil: The Future Of AI

Brian Wallace

By the mid-2030s, 38% of jobs in the US will have the potential for automation, with positions in manufacturing, retail trade, and construction leading the way. But while 37% of people are worried about their own job being at risk of automation, 73% also say that AI can never replace the human mind – and maybe that’s a good thing.

In spite of those mid-2030s predictions, one estimate projects that AI will create 2.3 million jobs in 2020. Because these systems need programmers to teach them to read human emotion, and ethics controllers to ensure they act in accordance with human values, humans heavily influence these machines. But human beings are fallible creatures; we make mistakes, and whether we know it or not, we hold unfair prejudices. It’s true that artificial intelligence has yet to master our soft skills, but machine learning takes other lessons from us instead.

On paper, AI looks like the perfect tool to reduce or even eliminate human bias, but when we feed an algorithm biased data, what comes out is just more bias. Racism in = racism out. One example comes from police departments using imperfect risk assessment technology. These risk assessment algorithms use AI to predict the likelihood that a defendant will commit future crimes, and authorities then proceed based on those results. A ProPublica study revealed a darker side to this tech: one formula was nearly twice as likely to flag a black defendant as a white one.
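The "bias in, bias out" dynamic can be sketched in a few lines of code. The example below is a hypothetical toy, not the actual COMPAS formula or any real department's data: two groups offend at the same underlying rate, but one was historically policed more heavily, so a model trained on the resulting arrest records learns to score that group as higher risk.

```python
# Toy illustration of "bias in, bias out" (all numbers invented).
import random

random.seed(0)

def make_history(n=10000):
    """Synthetic 'historical' records. Both groups offend at the same
    underlying rate, but group A was policed twice as heavily, so its
    members were arrested (labeled 'risky') far more often."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        offended = random.random() < 0.30          # same true rate for both
        policing = 0.9 if group == "A" else 0.45   # unequal enforcement
        arrested = offended and random.random() < policing
        records.append((group, arrested))
    return records

def train(records):
    """'Model' = per-group arrest rate learned from the biased labels."""
    rates = {}
    for g in ("A", "B"):
        labels = [arrested for grp, arrested in records if grp == g]
        rates[g] = sum(labels) / len(labels)
    return rates

model = train(make_history())
# The learned risk scores reflect the enforcement gap, not true behavior:
# group A scores roughly twice as "risky" despite identical offense rates.
print(f"learned risk, group A: {model['A']:.2f}")
print(f"learned risk, group B: {model['B']:.2f}")
```

Nothing in the code ever looks at true behavior once the labels exist; the model faithfully reproduces the skew in its training data, which is exactly the failure mode the ProPublica findings point to.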

In addition, crime-prediction software shows police where crimes are likely to be committed, but these programs often dig themselves into a feedback loop that leads to over-policing of majority-black and low-income neighborhoods.
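That feedback loop is easy to simulate. The sketch below is purely illustrative (invented neighborhoods and numbers, not any vendor's software): patrols go wherever the most arrests were recorded, arrests can only be recorded where patrols go, so a small initial skew hardens into a large one even though both neighborhoods have the same true crime rate.

```python
# Toy predictive-policing feedback loop (all numbers invented).
import random

random.seed(1)

TRUE_CRIME_RATE = 0.2                  # identical in both neighborhoods
arrests = {"north": 12, "south": 10}   # small historical skew to start

for week in range(20):
    # The software flags the neighborhood with more recorded arrests
    # as the "hot spot" and sends most patrols there.
    hot = max(arrests, key=arrests.get)
    patrols = {hood: (70 if hood == hot else 30) for hood in arrests}
    # Crimes are only recorded where officers are present to see them.
    for hood, n in patrols.items():
        for _ in range(n):
            if random.random() < TRUE_CRIME_RATE:
                arrests[hood] += 1

# The initial two-arrest gap has grown into a large disparity,
# which the software reads as confirmation it was right all along.
print(f"recorded arrests, north: {arrests['north']}")
print(f"recorded arrests, south: {arrests['south']}")
```

The model never measures crime directly; it measures where it looked. Each round of extra patrols generates extra records, which justify extra patrols the next round.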

So there are a few kinks to work out when it comes to AI; a big one is tackling our own prejudices and disordered thinking. But what happens when AI is used deliberately for evil, or as a means to control individuals and populations? Smart criminals see a valuable tool in AI, using it not only to identify vulnerable targets but also to gather data on them for customized phishing scams. Personalized targeting seems to be the name of the game on the dark side of AI, and email scams are only a small piece of the picture.

Today China has an estimated 200 million surveillance cameras, four times as many as the US. Outfitted with AI capabilities, these cameras are used by Chinese police to scan the faces of citizens in order to catch offenders, from jaywalkers to drug smugglers. Smart automated camera systems like this help governments collect data on their citizens, with or without their knowledge. In the hands of authoritarian rulers, this could mean hyper-targeted propaganda machines and disinformation campaigns.

When we fear AI, perhaps it’s more a fear of what humans are capable of than of machines. The future of AI and automation is in our hands; let’s be sure to shape it into a force for good. See this infographic for more detail on AI and automation, and how it’s changing the way we view our jobs, world affairs, and even each other.

Brian Wallace is the Founder and President of NowSourcing, an industry-leading infographic design agency based in Louisville, KY and Cincinnati, OH that works with companies ranging from startups to Fortune 500s. Brian also runs #LinkedInLocal events nationwide, hosts the Next Action Podcast, and has been named a Google Small Business Advisor for 2016–2018. Follow Brian Wallace on LinkedIn as well as Twitter.
