38 | FEBRUARY 2018 | Claims Magazine |
Last fall provided a steady flow of disasters. There were riots in Virginia over the removal of Confederate monuments, triggering disputes over monuments across the South. Fires burned across three Northern California counties, leaving thousands homeless. Three hurricanes ravaged Houston, Florida, the Virgin Islands, and Puerto Rico. FEMA pleaded "lack of money" while residents of Puerto Rico and the Virgin Islands went without water and electricity for more than a month. A small Montana power company was retained to restore power in Puerto Rico, while hundreds of linemen responded to Florida and Texas.
But the biggest disaster, which was not announced until autumn, occurred months earlier with the hacking of Equifax. More than 145 million people were affected when their information was stolen, including Social Security numbers, driver's licenses and credit card numbers. It was months before we found out. Meanwhile, who was busy purchasing all of that data?
Our government is offering millions to create an "artificial intelligence" that would allow the military to read the minds of our enemies. Meanwhile, Amazon is offering Echo (Alexa), gizmos that answer questions and provide information in our homes. Who knows what else those devices are capable of doing? George Orwell's 1984 is perhaps a few decades late, but Big Brother is listening.
Is artificial intelligence good or
bad for us?
As an admitted "Luddite," I suspect many of us have become captives of i-things. We just push a few buttons, hit the "buy" key, and the item is delivered two days later, or maybe that same day by drone. If you work in a store as a salesperson, a robot in a warehouse that selects the desired item and puts it on a conveyor belt could replace you. The only ones still employed will be delivery drivers.
For the claims profession, artificial intelligence (AI) will leave many on the unemployment line. Satellite photos or TV monitors will record auto wrecks, damaged houses, the extent of floods and more, with everything handled by AI computers. Settlements will be made by direct deposit. No need to go and look; if the insurer needs more information, it will send a drone to take more photos.
There have been pre-AI tools in the
past, and they’ve hurt as much as they
have helped. One involved voice analysis
lie detectors, where a recorded statement
went through a device that could detect
falsehoods in the speaker’s voice. I researched this when it first came out in the
1970s, and more than two-thirds of state
insurance regulators said they would consider its use an “unfair claims settlement
practice.” The other third hadn’t heard
about it, but suspected the same opinion
would be the case. AI gives no guarantee
that it will always operate in “good faith.”
AI has no faith, good, bad or otherwise.
Good or bad advice?
Last October, an article on PC360 by attorney Benjamin J. Carroll suggested ways to protect insureds "against the dangers of a recorded statement," warning that an insured's statements "can jeopardize your case from day one" because they are discoverable in litigation. But taking statements, written or recorded, has been the role of adjusters for centuries. How else can an adjuster make correct decisions? Yes, a recorded statement could be res gestae in litigation, but a well-handled claim should not end in litigation. If the insured committed a tort, it is better to know it early and settle the claim before the lawsuit is filed.
If insurers stopped taking statements (and written, signed ones are better than recorded), they would spend more defending lawsuits than they would settling owed claims. Besides, it won't be long before Alexa (Echo) gets subpoenaed to disclose what the insured said at home that the gizmo heard and remembered.
Ken Brownlee, CPCU, (kenbrownlee@msn.com) is a former adjuster and risk manager based in Atlanta, Ga. He now authors and edits claims-adjusting textbooks. Opinions expressed are the author's own.