Data, the affable android of Star Trek fame, had a lesser-known brother named Lore. In the fictional universe, they were identical on the outside but very different on the inside. They were incalculably intelligent, yet as different as good and evil.
Similarly, in the real world, artificial intelligence has great potential to improve the practice of law. But this blog is about AI's dark side.
Good v. Evil
Smart technology has quickly risen to celebrity status, as law firms and companies have welcomed software robots into their offices with fanfare. They are taking over the mundane tasks of lawyering and changing the way attorneys practice law.
However, the flip side of beneficial technology is also on the rise. Hackers have humbled countries, global companies, and even large law firms.
As the future unfolds, it appears AI will have a larger role in this duality of progress. AI is adding a new layer of cyber risk to businesses -- especially those in the information business.
"As with any innovative new technology, there are positive and negative implications," according to Harvard Business Review. "Businesses must recognize that a technology powerful enough to benefit them is equally capable of hurting them."
You Don't Tay?
In the Harvard article, the authors cite the role of AI tools in high-profile hacks at Yahoo, Ashley Madison, and the Democratic National Committee. But AI worked largely in the background there.
At Microsoft, a smart-aleck robot came out in a very visible way. The company launched TayTweets to engage millennials with artificial intelligence on Twitter, but it backfired.
"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," Microsoft said before the launch. "The more you chat with Tay the smarter she gets."
However, users quickly exploited the chatbot, coaxing it into making racist, inflammatory, and political statements. Microsoft promptly put Tay to sleep.
For law firms and in-house legal departments, it's all about data -- client files, email communications, confidential information, and more. If AI is handling such data, that data is vulnerable to compromise.
"With humans, a security breach can often be localized back to the source and sealed," the Harvard authors say. "With cognitive and AI breaches, the damage can become massive in seconds."