After watching the Netflix “Social Dilemma” documentary, I figured out what my problem has been lately. Facebook has been using artificial intelligence to keep me on my phone as a “user” and “consumer” for profit. Its algorithm uses machine learning to make suggestions based on a machine’s interpretation of what I find interesting.

Because of this, the recommendations I get on Google (which does the exact same thing), Facebook and all other social media platforms are one-sided. I hardly ever get information from the opposite camp. This means there is a divide between political views driven by a machine’s interpretation, built for profit.

In layman’s terms, everything I read is one-sided. Google will give a liberal certain links that a moderate or conservative will never see. The three camps will only see what has been handed to them by a machine learning system that was designed for advertising. This means the sides cannot agree politically or socially, because the “facts” we look up are chosen by machines to match our biases.

This technology is dangerous. It has split America in half, whether by design or not, and will continue to harm the fabric of our social harmony. For every problem, there are hundreds of writers with a viewpoint. Every angle is probably being written about, but we only see one side: the side our data history agrees with.

This is why we get upset. This is why we rage. Because there are no differing viewpoints to bring common sense to an argument, we feel empowered in our ideas; everything we find or look up is tailored to match them perfectly. And when someone debates your view, you feel threatened and dumbfounded, because you naturally assume everyone reads what you do.

I was drained after fighting what I thought was the good fight: trying to educate others on everything I had experienced, researched and analyzed. I felt I had enough information, and enough followers (readers) who agreed with my logic, to make a stand and post what I thought was the truth. When in reality, a lot of the subject matter I thought was relevant turned out to be machine-learning suggestions.

My problem is laziness: assuming Google and Facebook will feed me accurate news to post and comment on, instead of researching the information myself. A machine doesn’t understand empathy or apathy. The information it serves is whatever it predicts will keep you online longer, which supports its advertising business model.
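To make that mechanism concrete, here is a toy sketch of an engagement-driven feed. This is not any platform’s real code; the scoring rule, field names and example articles are all invented for illustration. The point is only that ranking by predicted engagement, based on what you clicked before, naturally surfaces more of what you already agree with:

```python
# Toy sketch (hypothetical, not any platform's actual algorithm):
# rank articles by overlap with topics the user has clicked before.
def rank_feed(articles, user_click_history):
    """Return articles sorted by a crude predicted-engagement score."""
    def score(article):
        # One point per tag the user has engaged with in the past.
        return sum(tag in user_click_history for tag in article["tags"])
    # Highest predicted engagement first: the feed amplifies past behavior.
    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "Policy A is working", "tags": {"liberal", "economy"}},
    {"title": "Policy A is failing", "tags": {"conservative", "economy"}},
    {"title": "Local sports recap",  "tags": {"sports"}},
]
history = {"liberal", "economy"}  # what this user clicked before

feed = rank_feed(articles, history)
print([a["title"] for a in feed])
# → ['Policy A is working', 'Policy A is failing', 'Local sports recap']
```

Notice that the opposing article still exists; it just sinks toward the bottom of the feed. Repeat this loop daily, with clicks feeding back into the history, and the one-sidedness compounds on its own.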

Don’t get me wrong, there are dozens of stories I researched and fact-checked before I published, but those stories were not handed to me via suggestion. That’s what makes them unique and interesting. From now on, when I write or post a story or headline, I will do it mindful of media hype, bias and algorithmic placement, without assuming everyone is on board.

Perhaps when we view articles in this light and find ourselves triggered, or pumped up with agreement (ego), we could stop and research the counterargument first, before posting an opinion that will only keep us divided.

James Carner