Zoom experienced tremendous growth beginning in March because of the coronavirus. It went from hosting 10 million daily meeting participants at the end of last year to over 200 million daily meeting participants in March of this year. The influx naturally attracted bad actors who took advantage of Zoom's security flaws, resulting in "zoombombings," and highlighted other security and privacy issues once researchers finally started to pay attention.
Last week, I wrote an analyst recommendation related to Zoom here, arguing that certain types of organizations should not use Zoom until the company completes its own 90-day security and privacy investigation and fixes. Many schools have already defected to other services based on that perceived risk, which was driven, in part, by a surprising FBI warning here, specific to "classrooms."
One comparison I made in my article was between Zoom and Facebook, which raised some eyebrows, including among other industry analysts. I want to explain here what I meant.
Zoom security and privacy problems did not start this year
Zoom's security problems did not begin with "zoombombings"; its security issues go back to at least March 2019. In CEO Eric Yuan's apology blog, he says, "Thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment." While many enterprises have chosen Zoom, Zoom is no stranger to security issues, and I would even say it has ignored them.
According to security researcher Jonathan Leitschuh, Zoom had a zero-day security flaw as early as March 2019, which wasn't addressed until that July. On the Six-Five podcast, Daniel Newman of Futurum Research and I discussed this security flaw and how Zoom's initial response was poor. The flaw allowed websites to forcibly open a Zoom call on a Mac and turn on the user's webcam. Zoom waited until Apple stepped in to fix the problem and disclose the zero-day issue. In Yuan's first response in 2019, he characterized it as a low-risk vulnerability; he later admitted the company didn't act quickly enough and had misjudged the security risk.
I believe the way Zoom handled that security risk is all too similar to the way Facebook handled its user data in the Cambridge Analytica incident. Between 2008 and 2015, Facebook allowed apps to collect user data from the people who used those apps and from their friends, which violated users' privacy. Cambridge Analytica was one of the companies that exploited the data. Facebook knew about the breach and signed an agreement with Cambridge Analytica requiring it to delete the user data, but the firm continued to use it while Facebook looked the other way. Facebook didn't respond until the breach was revealed three years later, tied to the 2016 election. Mark Zuckerberg responded by making the same promise the company had made eight years earlier: to add privacy controls that are easier for users. Facebook responded to the problem too late, and it waited until it blew up to fix it.
Zoom and Facebook both knew they were moving too fast
When Facebook made that promise in 2010, Mark Zuckerberg said in the Washington Post, "sometimes we move too fast." This was in response to Facebook advertisers having access to users' private information through a privacy loophole. In an interview with CNN over Zoom privacy concerns, Eric Yuan said the same thing: "We were moving too fast." Granted, Yuan was saying this in the context of the coronavirus. It doesn't change the fact that Yuan and Zuckerberg were both talking about privacy issues with their services.
Zoom let users log in with Facebook as a convenient way to access the platform, yet it violated user privacy by sending user information to the Facebook SDK. Zoom had the right idea but was moving too fast, and it cost users their privacy.
Similarly, user data was sent to a data-mining tool so that some users could view other attendees' LinkedIn profiles without their consent. Zoom has removed the feature, but we have to keep in mind that it was a feature. Zoom had the right idea to connect enterprise users, but it was moving too fast, and again, it cost users their privacy. I believe Zoom should contact customers whose personal data was unknowingly shared with Facebook and LinkedIn.
Dishonesty regarding customer encryption levels?
The more unfortunate way I believe Zoom acted like Facebook was in potentially being dishonest with its customers about security levels. I know that's harsh, and I don't want it to be true, but I cannot see how any tech company could accidentally mislabel its levels of security. It's akin to saying a dog has four legs when it only has two. This isn't a shade of gray; it's black and white.
Zoom told its customers they could hold end-to-end encrypted (E2E) conference calls. In a true E2E scheme for an enterprise-level video conferencing solution, Zoom itself shouldn't be able to decrypt the calls. However, Zoom is "E2E" only by Zoom's own definition, which isn't E2E at all; it uses TLS. In other words, Zoom encrypts its conference calls in transit, but it can still access the audio and video data from those calls.
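The distinction matters, and it can be sketched in a few lines. The following is a deliberately simplified Python toy (a hash-based XOR keystream, not real cryptography, and no real Zoom protocol details) showing the difference: with transport encryption, the relay server holds the keys and can read the plaintext at the hop; with E2E, the endpoints share a key the server never sees, so the server only ever relays opaque ciphertext.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream (NOT real crypto): SHA-256 in counter mode."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"meeting audio frame"

# --- Transport encryption (TLS-style): the server holds the keys ---
alice_server_key = secrets.token_bytes(32)  # key Alice shares with the server
bob_server_key = secrets.token_bytes(32)    # key Bob shares with the server
to_server = xor_crypt(alice_server_key, msg)
at_server = xor_crypt(alice_server_key, to_server)  # server CAN read plaintext
to_bob = xor_crypt(bob_server_key, at_server)

# --- End-to-end encryption: only Alice and Bob hold the key ---
e2e_key = secrets.token_bytes(32)           # never shared with the server
relayed = xor_crypt(e2e_key, msg)           # server just forwards this blob

print(at_server == msg)                       # True: transport hop sees plaintext
print(xor_crypt(e2e_key, relayed) == msg)     # True: only endpoints can decrypt
```

Again, this is only an illustration of the trust model, not of Zoom's actual implementation; the point is that under transport encryption the provider sits inside the encryption boundary, while under E2E it sits outside it.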
To make matters even worse, Zoom claims it uses AES-256 encryption keys. However, a security analysis by the University of Toronto's Citizen Lab shows Zoom uses a single AES-128 key in ECB mode as part of a home-grown scheme, which isn't as secure as promised. It also revealed that some keys used to encrypt and decrypt calls are sent through servers in China. Zoom has since responded to the Citizen Lab findings with an apology and a promise to fix the issues. In the same press release in which Zoom explained its own definition of E2E, it also said it "has never built a mechanism to decrypt live meetings for lawful intercept purposes."
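Why is ECB mode a problem? In ECB, each block is encrypted independently and deterministically, so identical plaintext blocks produce identical ciphertext blocks, leaking patterns in the data (a real concern for video frames with large uniform regions). Here's a minimal Python sketch using a keyed hash as a toy stand-in for a block cipher (not real AES, and not Zoom's actual code), contrasted with a counter-based mode that hides the repetition:

```python
import hashlib

KEY = b"\x00" * 16  # toy fixed key, for the demo only

def ecb_like(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for a block cipher in ECB mode: the same
    plaintext block always maps to the same ciphertext block."""
    return hashlib.sha256(key + block).digest()[:16]

def ctr_like(key: bytes, block: bytes, counter: int) -> bytes:
    """Mixing in a per-block counter makes repeated blocks encrypt
    differently, as CTR-style modes do."""
    return hashlib.sha256(key + counter.to_bytes(8, "big") + block).digest()[:16]

# Two identical 16-byte plaintext blocks, e.g. a repeated region of a frame.
blocks = [b"A" * 16, b"A" * 16]

ecb_ct = [ecb_like(KEY, b) for b in blocks]
ctr_ct = [ctr_like(KEY, b, i) for i, b in enumerate(blocks)]

print(ecb_ct[0] == ecb_ct[1])  # True  -> ECB leaks that the blocks repeat
print(ctr_ct[0] == ctr_ct[1])  # False -> a proper mode hides the repetition
```

This is the core of the Citizen Lab criticism: even with a real AES key, ECB preserves the structure of the plaintext, which is why standard guidance steers implementers toward modes like CTR or GCM instead.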
Let me be clear to those who aren't security experts: encryption schemes aren't shades of gray; they are black and white. Saying my security is 256-bit when it's really 128-bit is like trying to sell you a used car with two wheels when the ad says four.
If Zoom was dishonest about its E2E encryption then, speculatively, it could be dishonest about other things as well. That could lead Zoom down a path similar to Facebook's when it got into trouble with the FTC.
Wrapping up and a recommendation
Facebook's path of making and breaking promises is not a path Zoom wants to go down. This is especially true for an enterprise-level video call solution that is now suddenly popular with schools and consumers, too. So far, I believe Zoom has been slow to react to security flaws, has allowed features to compromise the privacy and security of users, and has potentially been dishonest (inaccurate, for sure) about the level and type of its video and audio encryption. Its reaction historically has been to apologize and promise to fix the issue, with little to no fix. Like Facebook.
Over the next 90 days, Zoom says here it is dedicating resources to fixing these critical issues and improving its security and privacy. I am very skeptical that it can fix everything it needs to fix to get more secure and private in 90 days. And once those issues get "fixed," will the experience be the same? Finally, the company's apology outlined the launch of a CISO council. Having sat on many councils, and still doing so, this strikes me as more of a marketing exercise than a security exercise. I would recommend that the CISO council's minutes be published to the public. Now that's transparency!
The more important question, I think, is whether we can trust a company that has said so much that turned out to be wrong for so many. To increase the level of trust, I think Zoom's board of directors should bring in an outside firm to investigate what went wrong with security and privacy and how to fix it, even if that means sweeping changes to the culture.
For the sake of its customers, partners, and employees, I hope Zoom changes; otherwise, it will be a sad ride down.
Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article.