The man who called himself “Mo” had dark hair, a foreign accent and — if the pictures he e-mailed to federal investigators could be believed — an Iranian military uniform. When he made a series of threats to detonate bombs at universities and airports across a wide swath of the United States last year, police had to scramble every time.
Mo remained elusive for months, communicating via e-mail, video chat and an Internet-based phone service without revealing his true identity or location, court documents show. So with no house to search or telephone to tap, investigators turned to a new kind of surveillance tool delivered over the Internet.
The FBI’s elite hacker team designed a piece of malicious software that was to be delivered secretly when Mo signed on to his Yahoo e-mail account, from any computer anywhere in the world, according to the documents. The goal of the software was to gather a range of information — Web sites he had visited and indicators of the location of the computer — that would allow investigators to find Mo and tie him to the bomb threats.
A federal court will be scrutinizing one of the National Security Agency’s worst spying programs on Monday. The case has the potential to restore crucial privacy protections for the millions of Americans who use the internet to communicate with family, friends, and others overseas.
The unconstitutional surveillance program at issue is called PRISM, under which the NSA, FBI, and CIA gather and search through Americans’ international emails, internet calls, and chats without obtaining a warrant. When Edward Snowden blew the whistle on PRISM in 2013, the program ...
We’re learning an important lesson about cutting-edge voice technology: Amazon’s Alexa is always listening. So are Google’s Assistant and Apple’s Siri.
Putting live microphones in our homes has always been an out-there idea. But tech companies successfully marketed talking speakers such as the Amazon Echo and Google Home to millions by assuring us that they record only when we say a “wake word.”

The “wake word” turns out to be a misnomer. These devices are always awake, passively listening for the command that activates them, such as “Alexa,” “O.K. Google” or “Hey Siri.” The problem is that they’re far from perfect about responding only when we want them to.
The latest, and most alarming, example to date: ...