AI, blockchain, and just an overall rise in technologies across the planet are changing the way traditional industries are doing business. The wide world of auto insurance is no exception, with disruptive technology from insurtech propelling the industry forward.
From apps that allow agents to quickly process applications on the go, to AI being used to navigate the vast amount of data involved in insurance forms, budding technologies are at the forefront of changing how consumers interact with the services that protect them against disasters and other life-changing events.
The AI revolution
In an industry built on data, it is surprising how long AI is taking to find its way into the field. Even setting aside grander ambitions, AI can help with the tedious parts of the paperwork and claims-verification process.
According to Experian, manual data entry accounts for upwards of 55% of the errors surrounding customer data, with typos responsible for another 32%. AI, or even simple automation, would alleviate many of these errors, speeding up the process for customers and reducing the time insurance professionals spend on corrections.
AI can also be used to speed up the claims process through auto-validation against baked-in rule sets. Companies are already working on this; one of them, Lemonade, boasts that an insurance claim was paid in a mere three seconds using its service.
The mobile and on-demand economy
Our smartphones are now as much a part of us as the shoes on our feet, so it only makes sense that advancements in insurtech are found throughout the mobile space.
Insurance marketplace EverQuote, which claims to save customers an average of $531 a year on auto insurance, released an app that monitors users’ driving habits. Not only did data show that users improved their driving habits while the app was in use, but live monitoring of driving habits is now being used to help pinpoint rates and identify potential risks.
Liberty Mutual, Allstate, and State Farm all have variations of drive-monitoring apps that can be used to improve rates and give a better look at driving habits. This helps eliminate the antiquated practice of rating drivers based on basic traits like age and gender and focuses on what is important – driving ability.
The growing on-demand economy is also affected by the rise of mobile technologies in the insurance space. With services like Uber and Lyft using a driver’s personal car, there are times when extra insurance could be beneficial. The problem is traditional insurance doesn’t work like that.
That’s where companies like Trov, founded in 2012, and Slice Labs, founded in 2015, come into play. These companies offer auto insurance (and renters insurance, in the case of Slice) only when you need it. The associated apps track the distance driven and calculate the cost of the temporary policy.
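Pricing a usage-based policy of this kind comes down to the miles the app logs. The sketch below is purely illustrative – the base fee and per-mile rate are hypothetical placeholders, not actual Trov or Slice Labs pricing:

```python
def trip_premium(miles_driven: float, base_fee: float = 1.00,
                 rate_per_mile: float = 0.05) -> float:
    """Estimate the cost of a temporary, per-trip policy.

    base_fee and rate_per_mile are illustrative placeholders,
    not any real insurer's rates.
    """
    return round(base_fee + miles_driven * rate_per_mile, 2)

# A 30-mile rideshare shift under these assumed rates:
print(trip_premium(30))  # 1.00 + 30 * 0.05 = 2.50
```

The appeal for on-demand drivers is that coverage, and its cost, switches on only while they are actually driving.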
While services like those above are targeted at the on-demand economy, it is not a leap to say that even people outside it would be interested in “part-time” insurance.
Blockchain in insurance
It would be impossible to write an entire article about budding technologies and not mention blockchain. A decentralized network of computers and smart contracts can be implemented in almost any existing industry to offer improvements, and the insurance industry is no exception.
With smart contracts and KYC (know-your-customer) data being transferred via the blockchain, insurance companies could alleviate many of the issues regarding the time it takes to verify policies for not only car insurance, but almost any type of insurance policy. While this is still a very new field, the Hong Kong Federation of Insurers (HKFI) is already working on a blockchain-based platform to address these issues.
Then you have companies like Proxeus, which, while not involved in the insurance game, offers a service that allows companies to integrate blockchain tech into existing workflows. Proxeus, with help from IBM, recently showed that it was possible to register a company in Switzerland in record-breaking time, all but eliminating many of the time-consuming processes traditionally involved, through the use of smart contracts and decentralized apps, typically referred to as dApps.
Registering a business is definitely not the same thing as signing up for car insurance, but the fact remains that the tech can change traditional processes and workflows by incorporating budding blockchain tech.
Technology is changing many facets of our lives and simplifying various aspects of our daily grind. Insurance is one of the industries that has taken a bit longer to adapt to these innovations, but AI, blockchain, and the growing advantages of mobile applications are helping bring the sphere into the 21st century.
This post is part of our contributor series. The views expressed are the author’s own and not necessarily shared by TNW.
They call Amazon the everything store—and Tuesday, the world learned about one of its lesser-known but provocative products. Police departments pay the company to use facial-recognition technology Amazon says can “identify persons of interest against a collection of millions of faces in real-time.”
More than two dozen nonprofits wrote to Amazon CEO Jeff Bezos to ask that he stop selling the technology to police, after the ACLU of Northern California revealed documents to shine light on the sales. The letter argues that the technology will inevitably be misused, accusing the company of providing “a powerful surveillance system readily available to violate rights and target communities of color.”
The revelation highlights a key question: What laws or regulations govern police use of the facial-recognition technology? The answer: more or less none.
State and federal laws generally leave police departments free to do things like search video or images collected from public cameras for particular faces, for example. Cities and local departments can set their own policies and guidelines, but even some early adopters of the technology haven’t done so.
Documents released by the ACLU show that the city of Orlando, Florida, worked with Amazon to build a system that detects “persons of interest” in real-time using eight public-security cameras. “Since this is a pilot program, a policy has not been written,” a city spokesperson said, when asked whether there are formal guidelines around the system’s use.
“This is a perfect example of technology outpacing the law,” says Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation. “There are no rules.”
Amazon is not the only company operating in this wide-open space. Massachusetts-based MorphoTrust provides facial-recognition technology to the FBI and also markets it to police departments. Detroit police bought similar technology from South Carolina’s Data Works Plus for a project that looks for violent offenders in footage from gas stations.
The documents released Tuesday provide details about how Orlando and the sheriff’s department of Oregon’s Washington County use Amazon’s facial-recognition technology. Both had previously provided testimonials about the technology for the company’s cloud division.
Orlando got free consulting from Amazon to build out its project, the documents show. In a prior testimonial, Orlando’s chief of police John Mina said that the system could improve public safety and “offer operational efficiency opportunities.” However, a city spokesperson told WIRED that “this is very early on and we don’t have data to support that it does or does not work.” The system hasn’t yet been used in investigations, or on imagery of members of the public.
Washington County uses Amazon’s technology to help officers search a database of 300,000 mugshots, using either a desktop computer or a specially built mobile application. Documents obtained by the ACLU also show county employees raising concerns about the security of placing mugshots into Amazon’s cloud storage, and the project being perceived as “the government getting in bed with big data.”
There’s no mention of big data in the US Constitution. It doesn’t provide much protection against facial recognition either, says Jane Bambauer, a law professor at the University of Arizona. Surveillance technologies like wiretaps are covered by the Fourth Amendment’s protections against search and seizure, but most police interest in facial recognition lies in applying it to imagery gathered lawfully in public, or to mugshots.
State laws don’t generally have much to say about police use of facial recognition, either. Illinois and Texas are unusual in having biometric privacy laws that can require companies to obtain permission before collecting and sharing data such as fingerprints and facial data, but make exceptions for law enforcement. Lynch of EFF says hearings by the House Oversight Committee last year showed some bipartisan interest in setting limits on law enforcement use of the technology, but the energy dissipated after committee chair Jason Chaffetz resigned last May.
Nicole Ozer, technology and civil liberties director at the ACLU of Northern California, says the best hope for regulating facial recognition for now is pressuring companies like Amazon, police departments, and local communities to set their own limits on use of the technology. “The law moves slowly, but a lot needs to happen here now that this dangerous surveillance is being rolled out,” she says. She says Amazon should stop providing the technology to law enforcement altogether. Police departments should set firm rules in consultation with their communities, she says. In a statement, Amazon said all its customers are bound by terms requiring they comply with the law and “be responsible.” The company does not have a specific terms of service for law enforcement customers.
Some cities have moved to limit use of surveillance. Berkeley, California, recently approved an ordinance requiring certain transparency and consultation steps when procuring or using surveillance technology, including facial recognition. The neighboring city of Oakland recently passed its own law to place oversight on local use of surveillance technology.
Washington County has drawn up guidelines for its use of facial recognition, which the department provided to WIRED. They include a requirement that officers obtain a person’s permission before taking a photo to check their identity, and that officers receive training on appropriate use of the technology before getting access to it. The guidelines also state that facial recognition may be used as an investigative tool on “suspects caught on camera.” Jeff Talbot, the deputy spokesperson for the Washington County Sheriff’s Office, said the department is not using the system for “public surveillance, mass surveillance, or for real-time surveillance.”
Ozer and others would like to see more detailed rules and disclosures. They’re worried about evidence that facial recognition and analysis algorithms have been found to be less accurate for non-white faces, and not accurate at all in law enforcement situations. The FBI disclosed in 2017 that its chosen facial-recognition system only had an 85 percent chance of identifying a person within its 50 best guesses from a larger database. A system tested by South Wales Police in the UK during a soccer match last year was only 8 percent accurate.
Lynch of EFF says she believes police departments should disclose accuracy figures for their facial-recognition systems, including how they perform on different ethnic groups. She also says that despite the technology’s largely unexamined adoption by local police forces, there’s reason to believe today’s free-for-all won’t last.
Consider the Stingray devices that many police departments began to quietly use to collect data from cellphones. Amid pressure from citizens, civic society groups, and judges, the Department of Justice and many local departments changed their policies. Some states, such as California, passed laws to protect location information. Lynch believes there could soon be a similar pushback on facial recognition. “I think there is hope,” she says.
Louise Matsakis contributed to this article.
PARIS — The Viva Technology conference in Paris is shaping up for its biggest edition yet, with French President Emmanuel Macron set to meet with Facebook founder Mark Zuckerberg and more than a dozen technology bosses in the run-up to the annual event.
Macron has promised to question Zuckerberg, who is also due to testify before members of the European Parliament on Tuesday, on issues like tax and data privacy.
The French government’s Tech for Good Summit is scheduled to take place on Wednesday, the day before Macron attends VivaTech, where he will be welcomed by Bernard Arnault, chairman and chief executive officer of French luxury conglomerate LVMH Moët Hennessy Louis Vuitton, who is cohosting the event.
Arnault, France’s wealthiest man, has gathered a jury of industry heavy-hitters for the second edition of the LVMH Innovation Award, due to be handed out to one of the 30 preselected contestants who will present their ideas at the LVMH Luxury Lab during the third edition of the conference, set to run from Thursday to Saturday.
Joining Arnault, the jury’s chairman, will be Ginni Rometty, ceo of IBM; José Neves, founder and ceo of Farfetch; Richard Liu, founder, chairman and ceo of JD.com; Peggy Johnson, executive vice president of business development at Microsoft, and Jimmy Iovine, cofounder of Beats Electronics, WWD has learned exclusively.
The remaining jury members are Ian Rogers, chief digital officer at LVMH; Alexandre Arnault, ceo of German luggage maker Rimowa, and music producer Alex da Kid, a spokesman for LVMH said.
More than 820 start-ups from 58 countries applied for the award, representing a rise of almost 60 percent versus last year. They are from China, Canada, Israel, South Korea, the United Kingdom, the United States and France.
Among them are AlgiKnit, which creates textiles from recyclable biopolymers; Arylla, which makes an invisible ink that can be used to verify the authenticity of luxury goods; Bobbli, which allows viewers to purchase products seen in a movie or on television, and Foko Retail, an online application to manage physical stores.
In addition, 22 LVMH-owned brands will be present on the group’s stand.
Bernard Arnault will be among the several dozen speakers at Viva Technology, alongside Facebook’s Zuckerberg; Rometty; Jean-Paul Agon, chairman and ceo of L’Oréal; Satya Nadella, ceo of Microsoft; Dara Khosrowshahi, ceo of Uber, and Marc Pritchard, chief brand officer of Procter & Gamble, among others.
Meanwhile, Rogers and Andrew Wu, LVMH group director for China, will take part in a CEO Forum talk about LVMH’s collaboration with start-ups in China.
The conference is organized by advertising and public relations company Publicis Groupe and French media group Les Échos, which belongs to LVMH and publishes the financial daily of the same name.
Mehrotra said the number of AI-capable servers in the industry is going from a “tiny sliver” of all servers to nearly half of all shipments by 2025. Those demand trends are leading to what he called a “virtuous cycle” that is driving memory-chip usage.
He predicted the data-center market for both chips combined will rise from $29 billion annually last year to $62 billion by 2021. In mobile devices, the memory-chip market will grow from $45 billion to $54 billion over the same period. The two other important markets, memory chips in cars and in the Internet of Things, will surge from $2.5 billion annually to $5.9 billion, and from $9 billion to $16 billion, respectively, as higher levels of autonomy in cars and the explosion of connected devices consume ever more memory, and memory chips become a larger share of a device’s bill of materials.
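Those projections imply steep compound annual growth. A quick back-of-the-envelope calculation – assuming the window runs from last year (2017) to 2021, which is my reading of the figures above – looks like this:

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

# Mehrotra's projected market sizes in $billions (2017 -> 2021):
markets = {
    "data center": (29, 62),
    "mobile": (45, 54),
    "automotive": (2.5, 5.9),
    "IoT": (9, 16),
}
for name, (start, end) in markets.items():
    print(f"{name}: {implied_cagr(start, end, 4):.0%} per year")
```

Under that assumption, the data-center and automotive markets would need to compound at roughly 20–24% a year, far faster than mobile's mid-single digits.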
One juicy example: by 2021, most “premium” smartphones will have 12 gigabytes of DRAM and one terabyte of NAND flash.
Mehrotra’s head of technology, Scott DeBoer, said the company today faces the need to be “very uniquely thoughtful” about how technology is adapted to individual markets, in contrast to the generic needs of the past.
“These things have to be comprehended in very early stage of the development,” he said, referring to things such as reliability in data centers versus power consumption budgets in mobile devices.
DeBoer said the company was gaining the “confidence” to work on more and more forms of DRAM and things that are disruptive. That includes the current “1Y” technology and three forthcoming flavors, “1Z,” “1-alpha,” and “1-beta.”
DeBoer pointed out the many ways the company has improved its manufacturing. In five years, for example, the company’s NAND flash has gone from a 65% “gap” in the density of chips relative to competitors in 2013 to a 15% lead on those companies.
Among the more tantalizing tidbits: Micron is going to be introducing a product based on the “3-D Xpoint” technology it has developed with Intel (INTC), in the latter half of next year. So far, Intel is the only one that has shipped product based on 3-D Xpoint, which it brands “Optane.” (More on Intel here.)
But Mehrotra and his other executives declined to offer details on those products, an interesting mystery punctuating the talk.
One product that was introduced was the world’s first “four-bits-per-cell” solid-state drive.
“To some extent you have to be a NAND physicist to understand it and think it’s great,” said DeBoer, to general laughter from the audience.
Micron turns 40 this year, and Mehrotra pointed out the company has filed over 40,000 patents in that time, over 1,000 patents a year.
Micron has “the best technology portfolio in the world, bar none,” said Mehrotra, meaning, of all chip companies in any areas of semiconductors.
He then showed a video of testimonials from various customers and partners, among them Nvidia (NVDA) CEO Jensen Huang, who spoke about how the companies collaborate on what memory-chip designs need to be to address GPU needs, and how the companies go to market. “The collaboration is really, really deep,” Huang said.
The big exciting news for the financial types came at the end of the day, when Micron Chief Financial Officer David Zinsner said the company will undertake a $10 billion share repurchase starting in the fiscal year that begins in September. That announcement drew gasps from the audience, and Zinsner joked that he had been tempted to rip off his mic and walk off stage, which drew big laughs from the audience.
Micron shares today closed up $2.09, or almost 4%, at $55.48.
Update: The buyback really did the trick, boosting shares by $2.12, or almost 4%, to $57.60, in after-hours trading.
See also: Part two of today’s meeting coverage.
Sign up to Review Preview, a new daily email from Barron’s. Every evening we’ll review the news that moved markets during the day and look ahead to what it means for your portfolio in the morning.
Marseille and San Francisco, May 17. Citizen astronomy is coming to Vivatech! After its record-breaking crowdfunding campaign, the largest technology campaign in French history and one of the biggest ever seen in Europe, Unistellar will be at Vivatech 2018 to display its eVscope, a revolutionary digital telescope that is 100 times more powerful than conventional telescopes.
Founded in 2015 in Marseille and with offices in San Francisco, Unistellar is on a mission: allow everyone, everywhere—whether they’re downtown or in the remote countryside—to look deeply into the night sky, feel the sense of wonder this view has always created in human beings and share it with friends and relatives. “The eVscope is much more than a very friendly telescope that turns amateur astronomy into popular astronomy. Thanks to a radically new hybrid digital/optical system, it gives viewers access to amazing views of the universe normally accessible only from telescopes many times its size,” explained Antonin Borot, Chief of Optical Engineering at Unistellar.
Unistellar has invented and deployed a proprietary technology, relying on a low-light sensor, that accumulates light through a series of short exposures, making it possible to view live through its eyepiece the beautiful colors and shapes of galaxies and nebulae usually invisible through a regular or even a high-end telescope.
The eVscope is a user-friendly device that can recognize astronomical objects, guide users to them, and be controlled from a smartphone. There is more: thanks to Unistellar’s partnership with the SETI Institute, it is also a powerful scientific instrument. Users can join observation campaigns run by leading astronomers to view stunning events like supernovae, comets or asteroid occultations, while generating valuable new data for scientists around the world.
“We recently demonstrated the eVscope’s potential by observing and recording, from a location near Marseille, a stellar occultation by an asteroid,” said Franck Marchis, Chief Scientific Officer at Unistellar and Senior Astronomer at SETI Institute. “By combining our observation with two others made by amateur astronomers, we were able to estimate the shape and size of the main-belt asteroid (175) Andromache. This is an incredible first scientific result for our small but very mighty telescope.”
Last November, Unistellar decided to crowdfund its product on Kickstarter. The campaign was a record-shattering success, raising $2.2 million from more than 2,000 backers and making the eVscope the biggest technology crowdfunding project in France and one of the biggest in Europe. The success is still growing: “After multiple requests from people late to this campaign, we recently decided to reopen a new crowdfunding campaign,” explained Laurent Marfisi, CEO of Unistellar.
Thanks to the support of Région Provence-Alpes-Côte d’Azur, Unistellar will attend Viva Technology 2018. Come and meet us at booth J05-028. We’re looking forward to talking to you about our unique enhanced-vision technology and how it creates a whole new kind of popular astronomy.
Created in 2015, Unistellar is a French start-up developing the eVscope, a revolutionary digital telescope 100 times more powerful than conventional telescopes. Compact and user-friendly, the eVscope makes it possible to observe deep-sky objects from anywhere and to participate in scientific observation campaigns, in partnership with the SETI Institute.
The eVscope received a CES Innovation Award in 2018, in the Tech for a Better World category. In November 2017, it raised 2.2M USD through a Kickstarter campaign, making it one of the biggest technology crowdfunded projects in Europe.
Psychologists follow the same rules as other scientists. But their efforts haven’t yielded equivalent progress. In fact, over the last decade, psychologists have realised that some of their most intriguing findings are not reliable – when other researchers try to repeat the same study, they don’t get the same results.
Many people refer to this as a replication crisis in the field. But what is to blame for this problem, and what can we do about it? In a new review, published in the Review of General Psychology, we describe a promising technological solution.
Most psychologists are convinced that the widespread misuse of statistics and poor research integrity – a euphemism for cheating – are ultimately to blame for the crisis. So, removing bad practices should solve the problem. Yet this often doesn’t work – seriously undermining confidence in the reliability of psychology.
We are convinced that tightening the regulation of research won’t fix the crisis. Instead, we need to go back over the past century to a crucial wrong turn in psychology that happened because of a limit in the technology of the time.
In the late 19th century, the American philosopher William James argued that the essence of psychology is hidden purpose. He famously described the purposeful behaviour of a frog held under water in an inverted glass. Despite attempts by the experimenter to stop it, the frog eventually found its way up to the air in surprising ways. James argued that the frog’s purpose was to get to the surface and it did this in different ways each time.
But it isn’t easy to test hidden purpose reliably in humans. Most research in psychology relies on getting large numbers of participants to provide data. The researchers then measure correlations, or the effects of experimental manipulations, in these groups. This research began before the time of the modern computer, when the researcher could simply present a “stimulus” to a participant and measure the response. And this approach persists today, making up the vast majority of studies in psychology.
Unfortunately, it’s not reliable. One recent series of “stimulus-response” studies were set up so that participants could respond to an image on a screen by either pushing or pulling a joystick. They were presented with either “negative” or “positive” images or words (stimuli). The researchers proposed that viewing a negative stimulus (such as an angry face) unconsciously activates the muscles that extend the arm. This is because that’s how we push something away if we are faced with it in real life. The initial studies supported this account – participants were quicker to respond to negative stimuli when the response was to push the lever away from them than when it was to pull it.
However, a huge review of over 68 attempts to test for this effect in more than 3,000 participants showed that this effect was not consistently repeated. Importantly, in tasks that were designed so that pushing the lever actually made the stimulus get closer, the opposite effect was found – negative stimuli were now associated with the response of pulling the lever.
The authors concluded that participants were actually controlling their perceived distance from the negative image through whatever action they could (just like James’s frog). But the traditional experimental design was simply not set up to test this.
In our recent article, we bring together the advances that researchers have made using an approach known as perceptual control theory. It continues where James left off, assuming the hidden purposes of living things, but it tests for them using a sophisticated approach. It typically relies on computing capacity to measure people’s activities in virtual environments, and to build a computer model of the psychological processes within the individual.
The technique is based on creating a situation where the participant can pursue a goal, for example controlling the distance from a negative image on a screen using a joystick. It then measures every change that goes on in the situation continuously (for example by making real-time videos) – including disturbances that get in the way of the person’s goal, such as changes in the experimental set up or physical obstacles. All this data is then used to build a computer model of how each participant is pursuing their goal.
You can then repeat the situation, using the computer model to predict what the individual will do, and constantly compare with what they are doing. If the model fails, you improve it until you’ve got a good match – creating a “personal profile” for each individual. This can then be tested for replication over repeated sessions. You can also combine data for many individuals to look at mean effects to work out what goals are generally relevant to a given situation.
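The fit-predict-refine loop described above can be sketched in a few lines. This is a toy illustration of the general idea, not the authors’ actual modelling code: it assumes the participant behaves like a simple proportional controller trying to keep a cursor on a moving target, fits the controller’s gain to one session of simulated data, and checks how well the fitted model predicts the trajectory.

```python
import numpy as np

def simulate_controller(target, gain, dt=0.02):
    """Proportional controller: at each step, move the cursor to
    reduce the error between cursor and target."""
    cursor = np.zeros_like(target)
    for i in range(1, len(target)):
        error = target[i - 1] - cursor[i - 1]
        cursor[i] = cursor[i - 1] + gain * error * dt
    return cursor

def fit_gain(target, observed, candidates):
    """Pick the gain whose simulated trajectory best matches the
    observed one (least squared error) -- the 'personal profile'."""
    return min(candidates,
               key=lambda g: np.sum((simulate_controller(target, g) - observed) ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
target = np.sin(t)                # moving target on screen
true_gain = 8.0                   # the participant's hidden parameter
observed = simulate_controller(target, true_gain) + rng.normal(0, 0.01, t.size)

fitted = fit_gain(target, observed, candidates=np.linspace(1, 15, 57))
model = simulate_controller(target, fitted)
r = np.corrcoef(model, observed)[0, 1]
print(f"fitted gain {fitted:.2f}, model-human correlation {r:.3f}")
```

Because the noise is small relative to the behaviour, the fitted model tracks the “participant” with a correlation near 1 – which is the kind of fit, far above traditional group-level effect sizes, that perceptual control theory researchers report.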
Replication … at last?
The result of this approach is typically a robust model of the psychological processes involved in an activity – such as tracking a target on a screen. These models have been shown to replicate with a high level of accuracy over and over again, typically showing correlations over 0.98 – a perfect correlation is 1.0. A correlation shows the association between two different variables (for example stimulus and response). This is virtually unheard of in traditional psychology research, where correlations as low as 0.3 are regarded as “statistically significant”.
You might think that modelling of this kind is only suitable for simple tasks, but a similar approach has been applied to many areas, including food competition in animals. This used frame-by-frame video analysis to show that a rat holding food in its mouth continually reorients its body to maximise the distance between its food and a competing animal’s mouth.
The same assumptions have informed treatments of spider phobia, helping to build tasks in which the participant can control their distance from a spider in a virtual corridor. Facing fears in this way is a treatment known as exposure therapy. However, it was previously unknown what level of control over the exposure works best. The study using this technique showed for the first time that people who have a higher degree of control over the exposure actually ended up avoiding spiders less after the experiment than those who had little control.
There are areas where it will be more challenging to use this technique – such as complex tasks involving memory and reasoning. Nevertheless, it could be easily applied in many areas.
The replication crisis has been the wake-up call psychological science needed to think differently – now it is time to embrace the advances in technology that allow us to improve the field.
There’s no doubt that the job market is changing. Gone are the days of learning how to do one job, sticking with it for 40 years and retiring with a desirable pension. In 2016, according to the U.S. Bureau of Labor Statistics, workers held a job for an average of 4.2 years before moving on. And 35 percent of workplace skills in all industries are expected to change by 2020, according to the World Economic Forum.
New technological developments continue to make certain roles in the workplace obsolete. Because these innovations are inevitable, the conversation is turning to training workers so that their skills remain relevant. At the 2016 World Economic Forum, a key takeaway was that learning environments would need to change — and advances in technology could hold the key.
Workers have recognized the need to remain up-to-date, and according to Pew Research Center’s “The State of American Jobs” survey, 87 percent of employees are looking to learn new skills and hone the ones they have in order to remain competitive in the workplace. With technology disrupting that process, the good news is that it’s going to get easier to keep up with those steep learning curves. Here are three important job-training trends:
1. The Death of Workplace Classrooms
Kevin Young, head of SkillSoft EMEA, explains that the benefits of seminars and in-person teaching opportunities don’t justify the cost: “To put it simply, sending half your workforce across the country to attend a training course just does not make business sense,” he wrote in ComputerWeekly. “We instinctively think human-to-human contact is needed to teach — but as a result of technology we can now do much of this virtually, using video links, virtual role plays, augmented reality and simulations.”
Most people have been educated in person their entire lives, and this habit can be hard to break. With new advances in technology, however, employees can refresh their skills virtually. Remote learning also has the benefit of taking place anywhere and at any time, catering to the busy schedules of all professionals.
2. The Rise of AI and Machine Learning
While AI will inevitably replace many jobs, it will also serve to educate employees to take on their next challenges. Christopher Pappas, expert in training management software and founder of the online resource eLearning Industry, explains in a blog post how technology and virtual training will change education: “The system will be able to predict every eventuality and desired outcome in a matter of seconds, then deliver eLearning content that caters to online learners’ individual needs, preferences, goals, and areas for improvement.”
There are lots of technologies that attract our attention – and money – these days. We’re obsessed with blockchain, cryptocurrency, IoT, big data analytics, cybersecurity, 3-D printing and drones. We’re excited about virtual reality, augmented reality and mixed reality. We love talking about driverless cars, ships and planes. We can’t wait for 5G and Wi-Fi domes that solve all of our network access problems; and while we’re getting a little worried about social media and privacy, we’re still addicted to our ever-more-powerful smartphones. We buy everything online. We’re into wearables. But there’s one technology that we all need to embrace: artificial intelligence (AI). While there are other families of disruptive digital technology, this one is special: you cannot afford to treat it as just another emerging technology. AI powers, amplifies and therefore supersedes them all.
Why So Special?
First, AI is special because it’s more than one technology; in fact, it’s a family of technologies. Second, AI is special because its application potential is so wide. Third, AI is special because it learns and sometimes even self-replicates. Fourth, AI is special because it satisfies ROI models of all shapes and sizes. Finally, AI is everywhere: which companies – and countries – are not investing in AI? There’s a bona fide arms race underway among the players, one that shows no signs of slowing anytime soon.
What is AI?
AI includes at least machine learning, deep learning, image recognition, robotic process automation, natural language processing, text mining, vision systems, speech systems, neural networks and pattern recognition, among other methods, tools and techniques that according to the father of AI, John McCarthy, represent “the science and engineering of making intelligent machines, especially intelligent computer programs.”
What Can AI Do?
There is very little AI cannot do. The range of applications is staggering, including all of the vertical industries and every business process and model that supports them. AI will profoundly impact healthcare, transportation, accounting, finance, manufacturing, customer service, aviation, education, sales, marketing, law, entertainment, media, security, negotiation, war and peace. No industry or process is safe from the impact that AI – across all of its components – will have in the short run and especially over the next seven to ten years. Keep in mind also that AI will integrate across business and technology architectures, databases and applications.
What Will AI Change?
Everything. The timing – as always with the adoption of emerging technologies – is debatable. But the changes will not all be good. AI empowers good and evil alike. Note the ease with which fake news can be created and disseminated by intelligent “news” creators, and how easy it is for smart bots to service personal and professional confirmation biases intended to manipulate thinking and behavior. At the same time, good bots will make much of our personal and professional lives more efficient and productive, freeing us to pursue other activities. Will AI eliminate jobs? Of course, and this time the elimination will include so-called knowledge workers as well as the traditional manufacturing jobs we associate with automation and robotics, which will increasingly operate in unsupervised contexts. Much of this capability will arrive simultaneously across whole industries, such as the automotive industry, which will use robotic AI to manufacture driverless cars and then manage their movement through cities and towns around the world. Similarly, healthcare will be affected across lifestyle, monitoring, diagnosis and treatment. No, AI will not kill us, but it will augment and replace many of us in the workplace. Again, it’s a question of when, not if, but the impact will be sweeping and will likely arrive much faster than many analysts predict. Regardless of how bullish or bearish you are about displacement, it’s safe to say that tens of millions of jobs – and knowledge-based careers – will be impacted, and in many cases eliminated, in the next five to seven years.
Who’s Investing in AI?
Who isn’t? Investments in all things intelligent are unprecedented. All of the major technology companies are heavily invested in the technology, but the most important investment portfolios belong to whole countries that have declared AI a strategic national objective. China, for example, has defined AI as one of its core industries.
What to Do About AI?
If your company is not already investing in AI, it’s way past time. Step one is the modeling of your current and aspirational processes informed generously by the potential of AI and predictions about the evolution of your industry. Elaborate process models should be developed, tested, simulated and inventoried to inform your AI pilot agenda. The simplest way to build this agenda is to identify the processes most amenable to AI and simulate the impact intelligent systems might have on the costs and benefits of the target processes. The most robust simulations should rank-order the processes that should be piloted with new technologies. Corporate partnerships should also be aggressively pursued, especially since AI is so broad. Companies need enabling partners that, for example, provide AI development and application platforms (which will come from their cloud providers in most cases). University partnerships are also valuable. Companies should befriend AI start-ups. Many AI technology companies will scan the start-up terrain for acquisition targets. Just as many established companies will scan the same environment for the same reason.
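The pilot-selection approach described above – score candidate processes by simulated costs and benefits, then rank-order them to build an AI pilot agenda – can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the process names, the dollar figures and the simple multi-year savings formula are stand-ins, not a prescribed model.

```python
from dataclasses import dataclass

@dataclass
class Process:
    """One candidate business process for an AI pilot (illustrative fields)."""
    name: str
    annual_cost: float       # current cost of running the process per year
    automation_share: float  # fraction of the work AI could plausibly absorb (0-1)
    adoption_cost: float     # one-time cost of piloting AI for this process

def simulated_net_benefit(p: Process, years: int = 3) -> float:
    """Crude multi-year savings estimate minus the pilot's one-time adoption cost."""
    return p.annual_cost * p.automation_share * years - p.adoption_cost

def rank_for_pilots(processes: list[Process]) -> list[str]:
    """Rank-order processes by simulated net benefit, best pilot candidates first."""
    return [p.name for p in sorted(processes, key=simulated_net_benefit, reverse=True)]

if __name__ == "__main__":
    # Hypothetical candidate processes with made-up figures.
    candidates = [
        Process("invoice handling", 500_000, 0.6, 200_000),
        Process("customer support triage", 800_000, 0.4, 350_000),
        Process("contract drafting", 300_000, 0.2, 250_000),
    ]
    print(rank_for_pilots(candidates))
```

A real assessment would replace the single formula with proper simulations of each target process, but the shape of the exercise – estimate impact per process, then rank – is the same.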
National governments should strategically commit to AI. This means that national research agencies – like the National Science Foundation in the US – should receive additional, directed funds to pursue a broad program of research and development that assures a global presence in the development and application of AI, which should be declared a 21st-century moonshot.
Do you have the right AI talent in your company? If you administered an AI IQ test back at the ranch, how well would the team do? If your company is like most, you will need to invest in AI education and training starting with executive education about the strategic role of AI in your industry.
Finally, brutal process assessments are necessary to optimize AI. No process should be exempt from what AI might offer. But make no mistake, much of this is political. There will be Luddites who challenge the applicability and power of AI. But AI is different from the other “stand-alone” emerging technologies. AI can disrupt your business in ways you need to identify – before you’re disrupted. So do you need an AI “czar”? You absolutely do.
Just last month I traveled to Las Vegas to serve as a panelist for the Travel Agent Forum’s Technology Boot Camp. During the three-day conference, I spoke to many agents at various stages of their businesses. Upon my return, I conducted an informal email assessment relating to the questions I was asked during the panel discussion.
This column will be the first of a series of articles that puts the spotlight on why technology is quickly becoming a requirement for success—and how you can evaluate strategies to embrace solutions that will meet the needs of your agency.
With technology, I’ve found that agents fall into three categories: those who love and embrace it, those who hate it and those who don’t know enough about it to have an opinion.
Those who have embraced technology are working it into every aspect of their business to make their agencies more efficient and to create exceptional experiences for their clients. Those who hate it typically aren’t looking to grow their business or are very resistant to change. Then there are those who are by no means opposed to technology but aren’t quite sure where to begin. They might begin to research a given product or tool but don’t believe they have the knowledge or skill sets to make the right choices for their agencies.
For the uninitiated, the best place to begin is with your website, which will serve as the epicenter of your agency. Everything else will build from there.
Here is a list of questions to ask a prospective website solution provider:
—Will you have full control over customizing the website?
—Does the provider supply travel content, or do you need to create it?
—Can you create your own content?
—Will the website offer integrated tools to easily create a marketing workflow?
—Does the provider offer in-depth training?
—Does the platform provide financial value?
—Is the provider in it for the long haul?
—What does the provider’s one-, two- and three-year roadmap look like – and is it willing to share that?
—How – specifically – can the provider’s website solution help grow your business? (Its answer will tell you a lot.)
—What distinguishes the provider from the competition?
My travel agency has grown tenfold because of my Agent Studio website and social media marketing efforts. I swear by technology and I firmly believe that those who do not embrace it will eventually be left behind.
Don’t become irrelevant. Kick off your technology strategy with a robust website that entices visitors to want to know more about your travel services. That’s when everything will begin to fall into place.