The Xbox has gone through several visual periods during its life span, from an edgy and yet somehow dorky green alien thing, to a modern look that could be described as “I know how to use Excel but I can still have fun.” But like stumbling on a Facebook album from high school, you can still hold on to a bit of the past. As spotted by senior editor Tom Warren, the original Xbox background is now an option for the Xbox Series X / S.
The new (old) styling was added as a new dynamic background as part of Tuesday’s system update, which notably also brought improvements to quick resume. Titled “The Original,” it looks like a higher-resolution version of the glowing green orb that was at the center of the first Xbox’s user interface. Please note: it’s not the interface itself (Microsoft wouldn’t abandon tiles like that), but it is a recognizable part of it.
My experience with the original Xbox is admittedly secondhand. To me, it was the loud box that lived at my friend’s house and let us play Halo: Combat Evolved. But I do think you can get a pretty solid hit of nostalgia just by looking at this background and remembering what used to be. A simpler, more green time, when consoles were consoles and not Metro-inspired (or I guess Fluent Design-inspired) pseudo-Windows machines.
Microsoft and the Xbox team have been through a lot since the 2001 launch of the Xbox — the Xbox One was briefly positioned as a sort of cable box — but there’s some charm missing in the current dashboard and user experience. That charm was exchanged for a mostly better, if more complicated experience overall, but the heart still remembers what the brain forgot.
Spotify’s adding another big name to its list of exclusive podcasts: Dax Shepard and his show Armchair Expert, which is one of the most popular podcasts running.
All past and future episodes will be available exclusively on Spotify starting July 1st. Along with the exclusive distribution rights to Armchair Expert, Spotify is also signing a first-look deal with Armchair Umbrella Network, meaning it gets first dibs on any other shows the network creates. The show will be exclusively licensed to the company for an undisclosed number of years. The terms of the deal were not shared, but the program will move over to Megaphone, a Spotify company, for hosting, and Spotify will handle ad sales in-house.
The deal harkens back to Spotify’s Joe Rogan partnership. Similarly, Rogan’s show, The Joe Rogan Experience, went exclusive to Spotify in September last year, although clips continue to live on YouTube. Armchair Expert is widely considered one of the world’s most popular shows, and Forbes estimated in 2019 that it had a monthly audience of 20 million people, putting Shepard near the top of its list of highest-earning podcasts. Also on that list are Rogan and Bill Simmons, who sold his company The Ringer to Spotify in 2020.
Clearly, Spotify has centered its podcasting strategy on bringing the biggest names to its platform exclusively. That seems to be working, given that the company said last month that it grew its premium subscribers by 21 percent year over year and that people were listening to podcasts for longer periods of time. It also added that The Joe Rogan Experience performed “above expectations.”
A key component of Spotify’s podcasting moves is that it makes shows available to both free and paying users, and also includes ads for both of them. This means that Spotify makes ad money on every podcast listen. With Armchair Expert, the company can bring more people to Spotify, offer another popular show exclusively, and sell more ads, all in a quest to become the dominant place people consume audio.
Humble Bundle is launching a new bundle to raise money for COVID-19 relief in India and Brazil, which have recently seen a surge in COVID-19 cases. The new Humble Heal: COVID-19 Bundle is jam-packed with a lot of great games, including the cult hit RPG Undertale, mind-bending puzzler Baba Is You, turn-based strategy games Into the Breach and Wargroove, as well as ebooks and software.
Humble Bundle says that all of the content in the bundle would be worth more than $640 if purchased separately, but you can get everything in it for as little as $20.
After a couple of generations making phones with flip-out cameras and increasingly large displays, Asus has taken the ZenFone 8 in a totally different direction: small.
The flipping camera concept lives on in the also-new ZenFone 8 Flip, but it’s no longer a standard feature across this year’s ZenFone lineup. Instead, priced at €599 (about $730), the ZenFone 8 lands in the upper-midrange class with a conventional rear camera bump and a much smaller 5.9-inch display. As a side note, final US pricing is TBD — Asus says somewhere between $599 and $799 — but it will be coming to North America, unlike last year’s model.
Rather than an attention-grabbing camera feature, the focus of this design has been to create a smaller phone that’s comfortable to use in one hand, which Asus has done without skimping on processing power or higher-end features.
It’s an Android iPhone mini, and it’s fantastic.
Asus ZenFone 8 screen and design
The ZenFone 8 may be small, but that hasn’t kept it from offering the latest flagship processor: a Snapdragon 888 chipset, coupled with 6, 8, or 16GB of RAM (my review unit has 16GB). I can’t find fault with this phone’s performance. It feels responsive, animations and interactions are smooth, and it keeps up with demanding use and rapid app switching. This is performance fitting of a flagship device.
The display is a 5.9-inch 1080p OLED panel with a fast 120Hz refresh rate that makes routine interactions with the phone — swiping, scrolling, animations — look much smoother and more polished than a standard 60Hz screen or even a 90Hz panel. By default, the phone will automatically switch between 120 / 90 / 60Hz depending on the application to save battery life, but you can manually select any of those three refresh rates if you prefer.
The display’s 20:9 aspect ratio was carefully considered by Asus. The company says it settled on this slightly narrower format so the phone would fit more easily into a pocket, and it does. I can’t get it all the way into a back jeans pocket, but it mostly fits. More importantly, it fits well inside a jacket pocket and doesn’t feel like it’s going to flop out if I sit down on the floor to tie my shoes. The ZenFone 8 is rated IP68 for dust protection and some water submersion.
The front panel is protected by Gorilla Glass Victus and houses an in-display fingerprint sensor, while the back uses Gorilla Glass 3 with a frosted finish that’s on the matte side of the matte / glossy spectrum. The front panel is flat, but the rear features a slight curve on the long edges for an easier fit in the hand. At 169 grams (5.9 ounces), it’s heavy for its size, and it feels surprisingly dense when you first pick it up. The phone’s frame is aluminum, giving the whole package a high-end look and feel. There’s even a headphone jack on the top edge as a treat.
The power button (an exciting shade of blue!) is well-positioned so my right thumb falls on it naturally with the phone in my hand. Same for the in-screen fingerprint sensor: the target appears to be positioned higher on the screen than usual, but that actually puts it within a comfortable reach of my thumb.
I’ll admit up front that I have a personal bias toward smaller phones, but the ZenFone 8 just feels great in my hand. I’ve spent a lot of time using big devices over the last six months, and I’ve gotten used to it. But the ZenFone 8 is the first device that feels like it was adapted to me, not something I’ve had to adapt to using.
Asus ZenFone 8 battery and software
The phone’s small size makes a smaller battery a necessity — 4,000mAh in this case, much smaller than the ZenFone 6 and 7’s 5,000mAh. I felt the difference in using this phone versus a battery-for-days budget or midrange phone, but I had no problem getting through a full day of moderate use. I even left Strava running for 20 hours by accident, and the battery still had some life in it the next morning. The ZenFone 8 supports 30W wired charging with the included power adapter, which takes an empty battery to 100 percent in a bit more than an hour. Wireless charging isn’t supported, which makes the ZenFone 8 a bit of an outlier in the flagship class.
Asus offers a ton of options to help stretch day-to-day battery life as well as the overall lifespan of your battery. There are no fewer than five battery modes to optimize phone performance or battery longevity on a daily basis, and different charging modes let you set a custom charging limit or stagger charging overnight so it reaches 100 percent around the time of your alarm for better battery health. You won’t find class-leading battery capacity here, but rest assured if you need to stretch the ZenFone 8’s battery, there are plenty of options.
The ZenFone 8 ships with Android 11, and Asus says it will provide “at least” two major OS updates, with security updates over the same timeframe. That’s on the low side of what we’d expect for a flagship phone, especially compared to Apple’s typical four- or five-year support schedule. An important note for US shoppers is that the ZenFone 8 will only work with AT&T and T-Mobile’s LTE and sub-6GHz 5G networks; you can’t use this phone on Verizon, and there’s no support for the fast, but extremely limited, millimeter-wave 5G networks.
Asus ZenFone 8 camera
There are just two cameras on the ZenFone 8’s rear camera bump, and they are both worth your time. Rather than cram in a depth sensor, macro, or some monochrome nonsense, Asus just went with a 64-megapixel main camera with OIS and a 12-megapixel ultrawide. They’re borrowed from last year’s model, minus a telephoto camera and the flipping mechanism.
As in the ZenFone 7 Pro, the 8’s main camera produces 16-megapixel images with vibrant color and plenty of detail in good light. Images can lean a little too far into unnatural-looking territory, and some high-contrast scenes look a little too HDR-y for my liking. But overall, this camera does fine: it handles moderately low-light conditions like a dim store interior well, and Night Mode does an okay job in very low light, provided you can hold the phone still for a few seconds and your subject isn’t moving.
A skin-smoothing beauty mode is on by default when you use portrait mode, and it is not good. Skin looks over-smoothed, unnaturally flat, and brightened, like your subject is wearing a couple of layers of stage makeup. Turning this off improves things significantly.
The ultrawide camera also turns in good performance. Asus calls it a “flagship” grade sensor, and while that might have been true in 2018, it’s at least a step up from the smaller, cheaper sensors often found in ultrawide cameras. Likewise, the front-facing 12-megapixel camera does fine. Beauty mode is turned off by default when you switch to the selfie camera, and thank goodness for that.
There’s no telephoto camera here, just digital zoom. On the camera shooting screen, there’s an icon to jump to a 2x 16-megapixel “lossless” digital zoom to crop in quickly, which works okay, but it isn’t much reach, and it just makes the limitations of the small sensor and lens more obvious.
On the whole, the camera system is good but not great. The lack of true optical zoom or a telephoto camera is a disappointment, but you can’t have everything on such a small device, and I’d personally take an ultrawide before a telephoto any day.
The ZenFone 8 fills a void in the Android market for a full-specced, small-sized device. The Google Pixel 4A is around the same size, but it’s decidedly a budget device with a step-down processor, plastic chassis, and fewer niceties like an IP rating or a fast-refresh screen. Aside from battery life, which is manageable, you give up very little in the way of flagship features to get the ZenFone 8’s small form factor.
You have to look to iOS for this phone’s most direct competition: the iPhone 12 mini, which it matches almost spec-for-spec from the IP rating down to the camera configuration. The 12 mini is actually a little smaller than the ZenFone 8, and when you factor in storage capacity, it’s likely to be the more expensive choice at $829 for 256GB. However, when you consider that the 12 mini will probably get a couple more years of OS and security support, it may be the better buy in the long run, if you’re flexible in your choice of operating system.
I like the ZenFone 8 a lot, but I’m not sure it’ll find a big audience, at least in the US. Apple is having trouble selling the iPhone 12 mini, and if there’s one thing Apple is good at, it’s selling phones to US customers. As much as I hate to entertain the idea, maybe we’ve gotten used to giant phones. I love how the ZenFone 8 feels in my hand and in my pocket, but I do notice how much smaller the screen and everything on it seems compared to the bigger phones I’ve used recently.
There are also a few important considerations, like the lack of compatibility with Verizon and the comparatively short support lifespan of the phone. If you need the absolute best in battery life, the ZenFone 8 can’t offer that, and if you want a class-leading camera, you’ll need to look elsewhere.
All that said, the ZenFone 8 will be the right fit for a specific type of person, and I can heartily recommend it to my fellow small phone fans. You’ll get flagship-level build quality and performance quite literally in the palm of your hand.
Asus is taking a slightly different turn with this year’s ZenFone series. While the ZenFone 8 Flip looks a lot like previous years’ phones, with its large screen and flip-out camera mechanism, the company went back to the drawing board for the flagship ZenFone 8 and redesigned it as a smaller one-hand-friendly device: kind of an Android iPhone mini. The two phones make their global debut today, priced at €599 for the ZenFone 8 and €799 for the ZenFone 8 Flip. Asus says that only the ZenFone 8 will come to North America; it is expected this summer. The US price is still being finalized, but the company says it will cost somewhere between $599 and $799.
The ZenFone 8 and 8 Flip both use a Snapdragon 888 chipset, but that’s about as far as the similarities go. The ZenFone 8 features a 5.9-inch 1080p OLED display with a fast 120Hz refresh rate. It will be sold in configurations of up to 16GB of RAM and 256GB of storage and includes an IP68 waterproof rating. Both the 8 and 8 Flip support 5G — but when the ZenFone 8 arrives in the US, it will only work on AT&T and T-Mobile’s LTE and sub-6GHz 5G networks.
The ZenFone 8’s two rear cameras are borrowed from the ZenFone 7 series: a 64-megapixel standard wide with OIS that kicks out 16-megapixel images and a 12-megapixel ultrawide. Since the camera array doesn’t flip forward to play the role of a selfie camera, there’s now a 12-megapixel camera under an off-center hole punch on the front panel.
The phone’s compact size is reflected in its 4,000mAh battery, which is much smaller than previous years’ 5,000mAh cells. It supports 30W wired charging with the included charger, but it doesn’t offer wireless charging. There are dedicated dual stereo speakers and even a 3.5mm headphone jack.
The ZenFone 8 Flip is, by necessity, a much larger device with a 6.67-inch screen — a 1080p OLED panel with a 90Hz refresh rate. It offers a bigger 5,000mAh battery with 30W wired charging, includes up to 8GB of RAM and 256GB of storage, but it lacks an IP rating.
The main attraction, of course, is its flip-out camera array. The triple-camera hardware is borrowed from the ZenFone 7, including a 64-megapixel main camera, a 12-megapixel ultrawide, and an 8-megapixel telephoto with 3x optical zoom. Asus says the module itself has a stronger motor with better endurance; users can expect to get up to 300,000 “flips” out of it. A custom RhinoShield case will be sold separately in some markets with a sliding cover to protect the housing and a sensor that automatically activates the camera when the cover is opened.
Slate is getting into the audiobooks business. The online magazine and podcast subscription seller is launching its own audiobook store today in partnership with multiple publishing companies. The store will list and sell popular titles but with the added benefit of making the audio accessible through listeners’ preferred podcast app instead of a separate audiobook-only platform. This is likely its biggest selling point for listeners, although Slate will compete on price, too. Listeners will also buy these books a la carte, meaning they don’t have to subscribe to an ongoing membership as they may through Audible, the biggest name in audiobooks.
The store and its functionality are powered by Slate’s Supporting Cast, its technology that powers recurring-revenue audio services, like subscription podcasts. This means that on the back end, Slate is hosting publishers’ audiobooks on its servers and creating private RSS feeds for them, which can then be inserted into any podcasting app that supports them, like Apple Podcasts, Pocket Casts, and Overcast. The process basically looks like this: listeners navigate to Slate’s store, buy a book, and can then either listen online or tap on the app of their choice to have the feed automatically added. They can also manually copy and paste the feed.
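The private-feed mechanism described above can be sketched roughly in code. This is a hypothetical illustration, not Slate’s actual implementation: the URL scheme, the signing key, and all of the function names are invented for the example. The core idea is just that each listener gets a unique, hard-to-guess feed URL pointing at a standard RSS document that any podcast app can read.

```python
import hmac
import hashlib
from xml.etree import ElementTree as ET

SECRET = b"server-side-secret"  # hypothetical signing key, kept on the server


def feed_token(listener_id: str) -> str:
    """Derive a per-listener token so each private feed URL is unique
    and can be revoked if it appears to be shared too widely."""
    return hmac.new(SECRET, listener_id.encode(), hashlib.sha256).hexdigest()[:16]


def private_feed_url(listener_id: str, book_id: str) -> str:
    # Invented URL scheme for illustration; the real store's format isn't public.
    return f"https://feeds.example.com/{book_id}/{listener_id}/{feed_token(listener_id)}.rss"


def build_feed(book_title: str, chapters: list[tuple[str, str]]) -> str:
    """Emit a minimal RSS 2.0 feed: one <item> with an <enclosure> per
    chapter, which is all a podcast app needs to list and stream the audio."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = book_title
    for chapter_title, audio_url in chapters:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = chapter_title
        ET.SubElement(item, "enclosure", url=audio_url, type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")
```

Because the output is ordinary RSS, the same feed URL works in Apple Podcasts, Pocket Casts, Overcast, or anything else that accepts a pasted feed link.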
David Stern, vice president of product and business development, tells The Verge that its software automatically looks for suspicious activity and will revoke access if it suspects someone is sharing their private RSS link outside of a “very small flexible range.”
Initial partners include Penguin Random House, Simon & Schuster, HarperCollins, and Hachette. Slate wouldn’t disclose its royalty agreements with these companies. The initial catalog is small, especially compared to Audible’s thousands of titles, but Slate seems to be interested in books that its team has reviewed for the website. As evidence for why Slate thought it should pursue an audiobook business, the company says it’s generated more than $1 million through its book affiliate business, and that it tested selling Danny Lavery’s Something That May Shock and Discredit You audiobook and sold 500 copies.
Slate’s move into audiobooks continues the trend of podcast-oriented companies looking to audiobooks and audiobook companies looking to podcasts. Spotify launched audiobooks in its app, hosted by celebrity talent, earlier this year and has put the founder of Parcast in charge of its audiobook efforts, per a Bloomberg report. Audible also brought podcasts to its app for the first time last year. (Apple, for its part, sells audiobooks through its Books app, not its Podcasts app.) The broader bet seems to be that people who enjoy listening to things will want to do so from one app.
“It’s sort of a no-brainer,” Stern says.
Slate is positioning itself to let people choose which app they want to listen in, although neither Spotify nor Audible supports private RSS feeds.
We’re nearly six months into the life of the PlayStation 5, but exclusive games that really showcase the power of the hardware are still relatively rare. That’s part of what made last month’s Returnal so exciting. It’s also a big reason why the upcoming Ratchet & Clank: Rift Apart is so highly anticipated.
Ratchet has always been an incredible-looking franchise — just look at the 2016 reboot on the PS4, which was reminiscent of an animated movie — and the latest promises to offer new features only possible on Sony’s new console. It’s still a goofy shooter-platformer filled with weird gadgets, but Rift Apart also features incredibly detailed sci-fi worlds to explore and the titular “rifts,” which let players instantly jump into new areas without any loading. (To see some of that in action, check out this recent, lengthy gameplay trailer.)
Ahead of the game’s launch on June 11th, I had a chance to talk to Mike Fitzgerald, core technology director at developer Insomniac Games, about the studio’s move into next-gen. He was able to get into the nitty-gritty of working on the console (in addition to Rift Apart, Insomniac has also released PS5 versions of two Spider-Man games), including some of the challenges of learning as you go. “This title is the first one where we made the content knowing it would only ever be running on the PlayStation 5,” he tells The Verge. “And so our artists would say ‘What kind of mesh density can I have?’ And I’d be like ‘… I don’t know.’ Because we didn’t have the hardware.”
Read on for our full conversation touching on what the team learned from Spider-Man, designing games with ray tracing in mind, why making realistic-looking metallic surfaces is so important, and much more.
This interview has been edited and condensed for clarity.
What were your first impressions of the PS5 when you finally learned what it was all about?
We got a briefing before seeing the hardware: “Here’s what’s coming, here’s what our priorities are going to be.” Fortunately, we have a great relationship with them — well, we weren’t a part of PlayStation then, but now we are — but we have a close relationship and got to be involved with that stuff pretty early, and that informed the game we were putting together. In that presentation in particular, I think the storage and I/O solutions really stood out to us as something that would be transformative, both in terms of development and the types of games we make.
What was that initial experience like of working on the Spider-Man games on PS5?
It was an awesome experience of peeling back layers of that hardware and realizing we need to push our engine side of things more, rather than fighting against the development hardware. The spinning hard drive of the previous gen was always a big constraint for us. Making open-world games on the PS4 is a lot of being very careful with the content you put together, how it’s packaged up, the budgets it fits under, planning ahead of time where you’re going to need to be and when.
A lot of those problems just go out the door [on the PS5], which is a big deal. It’s not just the drive itself, but it’s the hardware decompression engine around it, it’s the memory transfers that we leveraged piece by piece, more and more as we went through the project, and realized “Oh, we can make these transitions even faster, we can do them in the middle of a fight.” It was really an evolving process with the console. And it definitely came down to roadblocks in our engine that we needed to pick apart. Some of those basic assumptions of how long it takes data to get off a drive we got to rethink.
Is your job a lot of saying “No”? An artist or designer comes to you with an idea, and, particularly on the older hardware, you have to say we just can’t do that. And how does it compare on the PS5?
Okay, probably not “Nos,” but “Yes, but…” is a common refrain. If an artist wants to accomplish something, or a designer, we try to figure out how to get there. But maybe it’s a point that the compromises are way fewer right now. Of course artists and designers also have a great sense for what the hardware can accomplish, and I think it was challenging ourselves to do new and different things now that the hardware is so different.
Given that this is the first Ratchet & Clank game you’ve worked on, what was your impression of the series? What interested you about working in this universe?
The PS4 title was gorgeous, and I think it’s been really fun to continue that march of progress from a realism perspective. This is what we did with the Spider-Man titles, and what a lot of games do. How human can your characters look? How realistic can New York City be? And then apply that same tech, and the rigor behind that tech, to a more fantastical, exaggerated animated aesthetic for the Ratchet games. That’s been fun. We have realistic materials and lighting, using ray tracing to bring more realism to it. But then we also have an alien whose entire head is an eyeball. The way the silly combines with the realistic I think brings a unique quality to the game that lets it show off the graphical techniques of the hardware.
Can you talk a bit about how you interact with other departments like art or game design? It sounds like the team is pretty collaborative.
My group is a shared group across multiple titles that we have going at the same time. Different groups within that core technology group have really close relationships with different productions. So we have some audio programmers who work really closely with the audio team, we have animation programmers who work with the animation teams, and so on and so forth. We’re pretty tightly tied in with project schedules. With projects in R&D, we let them be creative; make some mock-ups or concepts that go further than we could ever go in the engine, and then in pre-production, let’s take that and figure out how we can accomplish it and what we can put together.
So when you were working on the Spider-Man games, were you filing away ideas for things that would work well with Ratchet on PS5?
Always filing away ideas. I would even say some stuff that is essential to the Spider-Man game turned out to be cool for Ratchet, and then has some awesome quality effect on that game that we maybe wouldn’t have put in if we’d only been making Ratchet.
Do you have an example?
Spider-Man is an open-world title. We built all of this tech to stream that open world as you go through it. When you’re downtown, there’s not much Midtown in memory. You can see it from a distance, but then as you go farther north, we pull in those areas. No Ratchet game has ever been constructed that way. They’ve always been: here’s a level, load the level, now you’re in that level and you play it. But by switching over the Ratchet world to use that same streaming architecture, we can pack more and more density and content and quality in every corner of a Ratchet & Clank world, because we’re happy to ditch the west side of Nefarious City when you go to the east side, and that type of thing.
Does it make it harder to know when to stop, when you have this ability to cram so much into a game? When you no longer have the same level of technical restrictions, is it harder to say “Alright, this is ready to go”?
Yes, and I would say it’s even difficult to develop the content in the first place to some extent. This title is the first one where we made the content knowing it would only ever be running on the PlayStation 5. And so our artists would say “What kind of mesh density can I have?” And I’d be like “… I don’t know.” Because we didn’t have the hardware. And we didn’t know, as the engine evolved, how the trade-offs would manifest themselves. Even once you have the hardware, it still takes you months or a year for your engine to evolve into it where you know how you want to spend your frame budget, what you do on the GPU versus the CPU, all that kind of stuff. For this game in particular, I would say we kind of just let our artists go wild and make some incredibly detailed objects and models and textures, and then gave ourselves the challenge to make it all run well.
Obviously ray tracing is a big buzzword right now, but when you’re making a game knowing from the beginning that it’s going to be supported, does that change how you approach things like art or level design?
For the Spider-Man games, it was a lot of “This looks really cool, this will have a great effect on the buildings in the city.” That kind of thing. We had a lot of content that was in the first Spider-Man game that wasn’t necessarily authored to show that feature off, but we knew that it would be in Ratchet & Clank from pretty early on. One thing it does is, the artists know to put a lot of care into the material properties that they author. So this is a metal and it behaves this way, and all of those physical material properties, so when it comes together it fits nicely when ray tracing is turned on.
There are some big, obvious features we can see in terms of the benefits of the PS5, like the fast load times or the rifts that pull you into a parallel world immediately. But are there any examples of smaller, less obvious things that are cool or that you’re really proud of that wouldn’t have been possible on the PS4?
With the SSD, it’s easy to say there are no load times, and look how fast we can load this other area, but it has all sorts of knock-on effects. We don’t need to be as careful with how we package our data. All of the assets for an area don’t need to be collated on the spinning hard drive to get the right streaming speed out of it. It makes the game smaller on your hard drive; it means we can patch it more easily. That’s a nice bonus. We unload the things literally behind you from a camera perspective. If you spun the camera around, we could load them before you see that. That lets us devote all of our system memory to the stuff in front of you right now, that you need to experience in that moment.
The ray tracing is nice and shiny — well, literally shiny — and it’s very obvious when it’s working. But it does have a really subtle effect on the materials. There’s a part where you’re in the spaceship with Rivet and Clank, for example, and you’re not actually looking at a reflective surface per se, but just all of the metal things in that cabin, which are all curved in different ways, are all showing the effect of those characters shifting position in a realistic way. It takes us a long way toward getting the same feeling of an animated film. The way things are grounded in the environments, the way they’re animating with each other, helps us close that gap.
Is that the goal? To have it look like a high-quality animated film?
Certainly for this title, from a rendering quality perspective, we would love to be delivering stories in the same way that those films deliver stories, and having that emotional effect for players. I think between the performance capture we do now, the detail and density of the animation rigs that we have, we can tell some really good stories that I think can hit in the same way that the films hit.
Now that you’ve spent some time with the PS5, and the studio has made three games for it, what are some aspects where you’re excited to see where it goes in the future? Some feature where you can’t wait to see how Insomniac or other studios will be able to exploit it for future games. What do you think is the thing people will really be able to dig into?
Behind the scenes, there’s so much to peel back about the SSD and the I/O around it. We’re just scratching the surface of it. As a developer, that will be really cool to see how it turns out. I love seeing what the other internal PlayStation studios are doing, we have an awesome relationship with them. We don’t show each other everything all the time, so we still get that fun surprise and delight when we see what they’re doing and get to marvel at how good it looks… and then try to pick it apart and see how to do better.
In the future, our vehicles and homes will be in constant conversation with the power grid. Smart thermostats will send information about how much energy the home is using or potentially wasting to heat or cool itself. Solar panels will say how much energy they have on hand, while electric vehicles will share information about when and where they’re charging and how much juice they need for their travels. Solar and EV batteries might even offer up the energy they’re storing in case it’s needed elsewhere.
“You just plug it in, and somehow it automatically talks to its nearest neighbors,” explains Ben Kroposki, a director at the National Renewable Energy Laboratory. “[It] says, ‘Hey, I just want to let you know I’m out here. I can provide these kinds of services back.’”
That conversation is the backbone of what’s called a “smart grid.” While America’s aging grid system was built to send electricity in one direction — from power plants to homes and businesses — smart grids are a two-way street. Homes and buildings send information and electricity back to the grid or to other homes and buildings. An electric vehicle battery, for example, might be able to provide power to an area in the middle of a blackout. A smart grid also listens for directions from the utility, so that it charges whenever solar or other renewable energy is most abundant.
It’s a simple enough idea, one that has been sold for more than a decade as a way to improve the efficiency, environmental impact, and resiliency of the power sector. But electricity grids still have a long way to go to get “smart.” They’ve managed to fail spectacularly under the stressors of climate change and more extreme weather.
After years of underinvestment, there’s renewed hope that long-awaited smart grids might actually come to fruition. President Joe Biden can’t reach his goal of getting the power sector to run on 100 percent clean energy by 2035 without a smarter grid. And grids can’t get smarter without the kind of urgency that Biden has injected into overhauling America’s infrastructure.
“This probably is the most exciting time in the power system history in the last 50 years,” Kroposki says.
While Biden’s clean energy goals are vital for staving off a deeper climate crisis, the plan has exposed some weaknesses in our current grid system that a smart grid could help solve. For starters, old grids were built to accommodate a constant flow of electricity; power plants can ramp generation up and down at will to deliver as much energy as people demand.
Wind and solar power aren’t so consistent. When it’s sunny and gusty, too much energy might overwhelm the grid, leading to some of it going to waste. There also isn’t enough energy storage — aka batteries — to hold onto that excess renewable energy so that it can be used when sunshine and wind die down.
A smarter grid can better manage power demand, making use of renewable energy when it’s most abundant and preventing energy shortages. Embattled California utility PG&E, which has come under scrutiny for pervasive rolling blackouts in recent years, has partnered with BMW on a pilot “smart charging” program that incentivizes EV drivers to charge their cars whenever there’s excess renewable energy, typically in the middle of the day.
That kind of coordination can also prevent blackouts by taking pressure off the grid when there’s peak demand, typically when people come home from work in the evening or crank up their air conditioners in the summertime. Managing that demand will become even more important in the race to electrify homes, buildings, and transportation so that they can run on renewable energy. Some cities have banned new gas hookups in favor of electricity, and California has banned the sale of new internal combustion engine vehicles starting in 2035. Electricity grids will have to brace themselves for all those changes. “We’re going to need to speed up the pace of the grid investments in order to keep up with everything that’s happening outside the grid,” says Karen Wayland, CEO of the GridWise Alliance, whose members include utilities, tech, and energy companies.
A flood of new electric vehicles could overwhelm old, creaky grids. But EV batteries could become an asset in an updated, smarter grid. The same is true for residential solar power systems with batteries. They might provide backup power when extreme weather causes problems, like when a storm forces a power plant offline or when a heatwave drives up power demand for air conditioning. But to be able to do that, utilities need to build out a way to communicate with those batteries so that they know when they’re available and how much capacity they have.
In the middle of an outage, a smart grid can also sense excess power being wasted. It might have been able to divert power from empty downtown Houston skyscrapers to people facing freezing temperatures inside their homes during the Texas freeze earlier this year. “In laser, scalpel-like precision, you can turn the building lights off or down … and avoid having to do the rolling blackouts by being able to connect in real time to those assets,” Michael Bates, global general manager of energy at Intel, told The Verge at the time.
Seeing the potential of smarter grids, the Obama administration funneled $11 billion toward developing smart grids and the reduction of power outages. But the money wasn’t enough. Outages have been on the rise since 2009, when Biden announced the investment as part of an economic stimulus package.
Obama’s initial investments spurred the adoption of smart meters in the US, which can tell utilities how much energy a household is using at regular intervals. (Before that, utility workers had to come out and read the meters.) The future will be more granular; utilities may be able to read how much energy each appliance in your home is using. But that initial funding was only a drop in the bucket compared to what’s needed to unlock the full potential of smart grids. A 2011 report by the Electric Power Research Institute estimated that it would cost up to $476 billion over 20 years to fully modernize the grid. To make things harder, the Trump administration dismantled a smart grid advisory board that Obama had started and took other actions to kneecap grid modernization research.
Similarly, Obama set the US on course to slash greenhouse gas emissions — but not with the same urgency we’re now seeing under the Biden administration. That’s in part because so much time has been wasted, and the climate crisis has only grown more destructive and exacerbated by disasters that batter energy grids. “The utilities and the markets still felt like it was more of an evolution,” Bates tells The Verge. “I think everyone’s now starting to see this more as a revolution.”
That revolution is gaining momentum. A new advisory council formed last week to push for funding to modernize the power sector. Representatives from labor and environmental groups, utilities, and tech companies are all part of the council calling for a $50 billion investment. That would go toward making sure every household has a smart meter, plus installing sensors, controls, and other equipment across the grid to analyze and respond to energy supply and demand. There’s also a need for a better communications infrastructure for utilities — either through fiber optic or wireless networks.
The council is also backing Biden’s sweeping $2 trillion infrastructure plan, which proposes laying down new high-voltage transmission lines to make more resilient grids.
Beyond garnering the money and political will necessary to really get the ball rolling on modernizing the grid, there are more technical details to hammer out. Keeping an increasingly digital power system safe from hackers is one of them. So Wayland’s group is calling for $1 billion in funding for the Department of Energy to deploy cybersecurity technologies, and another billion to split between monitoring cyber threats and developing a cybersecurity workforce for the energy sector. In April, the Biden administration launched a 100-day action plan to safeguard utilities’ control systems from “increasing cyber threats.”
A security breach at a water treatment plant in Florida in February is one example of how vulnerable utilities can be. In that case, hackers tried to increase the concentration of a chemical in the water to poisonous levels. The attempt failed because a worker at the plant figured out what was going on and brought the chemical concentration back down to a safe range. Safeguards in a smarter grid could prevent a similar attack on the power sector, Wayland says, by blocking any commands that fall outside a predetermined range of operations.
Another crucial detail to figure out is how electricity rates and energy bills will differ under a smart grid system. To better manage power demand, experts say, smart grids should respond in real time to changing electricity rates.
Smart grids could one day make suggestions for, or even automate, when people charge their cars or heat their homes. But utilities will have to offer an incentive to get people to agree to that. The personal payoff is lower electricity rates based on the time of day you buy electricity.
Right now, most residences pay fixed rates for electricity, which insulates their bills from sudden price changes related to supply and demand. With so-called time-of-use rates, on the other hand (which are more common for heavy industry than for residences), electricity rates can vary by the hour. During the Texas freeze, a similar kind of rate system shocked many homeowners. While people who signed up for it might have saved money by paying wholesale prices throughout the year, some saw their electricity bills skyrocket by thousands of dollars because of fuel shortages. Electricity rates under a smart grid would be similarly variable, so protections will need to be put in place to ensure the same thing doesn’t happen.
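To make the incentive behind time-of-use pricing concrete, here is a small illustrative sketch. All rates and the household usage profile are invented example numbers, not figures from any utility or from this article; it simply shows why shifting an EV charge to midday can pay off under hourly rates.

```python
# A hypothetical illustration of time-of-use pricing. All rates and the
# household usage profile below are invented example numbers.

FLAT_RATE = 0.15  # dollars per kWh, the same all day

def tou_rate(hour: int) -> float:
    """Made-up time-of-use schedule: cheap midday, expensive evening peak."""
    if 10 <= hour < 16:
        return 0.08  # midday: renewable energy is abundant
    if 17 <= hour < 21:
        return 0.30  # evening: peak demand
    return 0.15      # all other hours

def daily_bill(usage_by_hour, rate_fn):
    """Sum of (kWh used that hour) * (rate for that hour)."""
    return sum(kwh * rate_fn(hour) for hour, kwh in usage_by_hour.items())

def day_with_ev_charge(charge_hours):
    """Baseline load of 0.5 kWh every hour, plus 4 kWh per hour of EV charging."""
    usage = {hour: 0.5 for hour in range(24)}
    for hour in charge_hours:
        usage[hour] += 4.0
    return usage

midday = day_with_ev_charge([12, 13])   # charge while solar is plentiful
evening = day_with_ev_charge([18, 19])  # charge during the evening peak

bill_midday = daily_bill(midday, tou_rate)
bill_evening = daily_bill(evening, tou_rate)
bill_flat = daily_bill(midday, lambda hour: FLAT_RATE)  # same total kWh either way

print(f"time-of-use, midday charge:  ${bill_midday:.2f}")   # cheapest
print(f"time-of-use, evening charge: ${bill_evening:.2f}")  # most expensive
print(f"flat rate, either schedule:  ${bill_flat:.2f}")
```

With these invented numbers, charging at noon comes out well below the flat-rate bill, while the identical amount of energy drawn during the evening peak costs noticeably more. That gap is the behavior a smart charging program is trying to encourage.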
“Now that may only occur once every nine years, but when that occurs you get really upset,” says Henry Lee, director of the Environment and Natural Resources Program at Harvard Kennedy School. “And you run to your elected officials and you run to the regulators to say that they’re ripping me off.”
That complication illustrates the need for policy changes on top of advances in infrastructure and tech. “You have to build in some safeguards, you know, from that happening,” NREL’s Kroposki says.
Things will probably continue to get more complex in the race to “get smart.” After all, what a fully fledged smart grid really looks like is sort of a moving target. The realm of what’s possible will only grow with more technological breakthroughs.
“For me, it’s a portfolio of continually evolving solutions,” says Luis Munuera, an energy technology analyst at the International Energy Agency. “I don’t see so much an endpoint as a process.”
Best Buy is hosting a one-day flash sale on a few items, and its small assortment of gaming laptops stood out from the rest. The Razer Blade 15 Base seems like a good midrange laptop with its hexa-core Intel Core i7-10750H processor, Nvidia’s GTX 1660 Ti graphics chip, and 16GB of RAM. This model costs $1,100 today only, down from $1,500. In terms of other noteworthy specs, it has a 1080p display with a 120Hz refresh rate and Thunderbolt 3 for fast data transfer, or to connect to the company’s own external GPU enclosure.
It’s a bit disappointing that it comes with a paltry 256GB of storage, though it has an extra M.2 slot available for adding another NVMe or SATA M.2 SSD. You can check out Razer’s page for more detailed specs of this specific laptop model right here; note that Razer lists it at $200 more than this deal at Best Buy.
Razer’s Blade 15 Base is thicker than the Advanced model, but it has the same good build quality, comfortable keyboard, and a Thunderbolt 3 port.
If you’re okay with a gaming laptop that makes no effort to look subtle, Asus’ ROG Strix G15 isn’t a bad deal at $850, also at Best Buy (normally $1,000). Compared to Razer’s laptop above, this one has the same processor, but with a bigger 512GB SSD and a slightly faster 144Hz refresh rate display. It likely won’t be as good for gaming, though, since its Nvidia GTX 1650 Ti is a notch below the 1660 Ti in terms of performance, and it has half the RAM of Razer’s option above. Still, this seems like a fine machine if you don’t play the most graphically demanding games.
Asus’ bold-looking ROG Strix G15 might be an excellent fit for someone who’s looking for an entry-level gaming laptop under $1,000. It has a fast 144Hz refresh rate display and a capable processor, but its GTX 1650 Ti graphics chip and 8GB of RAM may not be enough to make every game look and run well at higher settings.
A speedy USB-C wall adapter is a good thing to have on hand for your Android phone, iPhone, or any other device, like a Nintendo Switch or a laptop, that can recharge over the reversible plug. RavPower has a two-pack of its 20W USB-C adapters selling for just $11.69 (before tax) at Amazon when you clip the 10 percent off coupon located right beneath its listing price. Many power-hungry devices require more than 20W to recharge at full speed, but these are a perfect fit for most phones — especially the latest iPhones, which don’t include a charger with purchase. Apple sells one separately for $20; this deal gets you two chargers for a little over half that price.
At Amazon, you can get a two-pack of RavPower’s 20W USB-C chargers for $11.69 by clipping the coupon on the product page. Compared to Apple’s $20 charger, this is a great deal.
Lastly, Daily Steals is offering a good deal on the unlocked Google Pixel 3 XL with 128GB of storage. You can get the “not pink” version of the phone that’s new with a one-year warranty from Google for $240 by using the offer code VERGEPXL3 at checkout. Despite being a few years old at this point, this phone still has good photography chops and it will be among the first devices to get the upgrade to Android 12 software later this year (though it will likely be the last major update coming to this phone, security patches aside). Check out Dieter Bohn’s original review here for photo samples and more.
One of the biggest esports tournaments in the world is coming back for 2021. Today, Valve announced that The International, the annual Dota 2 championship, will return after a one-year hiatus caused by the pandemic. This year’s iteration will take place in Stockholm, with the group stage starting on August 5th. The International is well-known not only for its intense, global competition, but also for being incredibly lucrative: this year’s tournament will feature a prize pool of $40 million.
While we know that the tournament will be taking place, it’s not clear yet whether fans will be in attendance. “As we continue to plan the event around the shifting landscape presented by the ongoing global pandemic, our focus remains on finding ways to hold a high quality tournament in the safest way possible,” Valve wrote in a blog post. “This means we’re waiting to release additional details on attendance options as we gather more information on developments heading into summer. We expect to be able to share more with the community during the month of June.”
As part of the announcement, Valve also introduced a new Dota 2 feature called supporters clubs. Players are able to buy in-game items like badges and loading screens, with 50 percent of sales going directly to their favorite esports team. The feature will support 17 teams starting today, with more expected to be added over time. “As more content from other teams is submitted and approved, they will be added to this list regularly,” Valve said.
My twin sister Alita and I have a credit problem: not because we’ve defaulted on loans or skipped bills, but because the US credit rating agencies can’t seem to tell us apart. Sometimes they associate her name with my social security number, sometimes it’s the other way around — and sometimes we both show up under the same SSN.
When I applied to work at The Verge, my background check gave my name as Alita Clark; Mitchell Clark was listed as an alias. Over and over, Alita and I have been rejected for credit cards, despite both having good credit. I was rejected for a car loan by a bank that I’ve used for years — despite having enough cash to immediately pay off the loan. Neither of us has had issues with getting access to housing, but it’s hard to feel sure it won’t happen in the future. The problem isn’t banks or lenders but the credit system itself, a vast and invisible information network with little incentive to correct even the simplest of problems.
If it were a single agency or company getting it wrong, I might be able to set the record straight, but the credit system is a thicket of overlapping forces, so densely woven that it can be hard to tell which part of the system is making the mistake. In the US, three companies keep track of almost everyone’s credit history: Equifax, TransUnion, and Experian. Most people will be familiar with their credit score — a number supposedly reflecting how reliably you pay back lenders — but these three companies draw on hundreds of data sources to come up with that number. As you can imagine, when you’re collecting tons of data on almost every adult in the country, mistakes happen.
In theory, consumers are supposed to have some recourse when the credit system screws up. Each agency has its own dispute process, with its own standard of documentation and evidence. Credit furnishers, the companies providing the credit reporting agencies with information, are also required to accept and investigate disputes. If those methods fail, consumers can file a complaint with the Consumer Financial Protection Bureau (CFPB), which will then forward it to the appropriate ratings agency.
But in practice, those investigations are anything but thorough. A 2012 CFPB investigation into how the credit ratings agencies manage data found that they hand off 85 percent of disputes to their furnishers — often without any of the evidence that consumers have included to back up their complaints. Often, by the time a mistake is resolved, it has spread to another agency and the whole process has to begin again. The distributed nature of the system makes it impossible to pin responsibility for a mistake on any single party, or to make anyone responsible for fixing it.
In lots of cases, such as mixed files or in cases of identity theft, the credit bureaus would be in the best position to know if information was incorrect, says Evan Hendricks, a credit reporting advocate who literally wrote the book on credit scores and how the system can go wrong. “But they don’t care. Because of the creditors instructing them to keep it on, they keep it on.” He says it was “the fundamental business model of the credit bureaus to faithfully put on your credit report what creditors furnish and to faithfully keep it on further creditors’ instructions once you dispute it.”
I found this out the hard way. Starting in 2017, I pulled my reports from all three agencies, hoping to figure out which ones had me listed as Alita and how to fix it. Since the system was already too mixed up for fact-based verification to work (I’m often asked about student or car loans that I’ve never heard of), I had to mail in a physical copy of my social security card and driver’s license, then wait for the reports to arrive by mail. Experian seemed to have everything right, but Equifax had my SSN listed as my sister’s. TransUnion had the right social security number, but my name was listed as Alita Clark.
I filed disputes, sending copies of my social security card, driver’s license, and birth certificate. When I checked back a few months later, it seemed like the fixes had mostly worked. My TransUnion report had my name and SSN right, though Alita showed up as an alias. Equifax also had my information right, but it said I was “formerly known” as Alita. Both had my correct credit history. So far, so good enough.
In 2019, I applied for Apple’s new credit card, wanting to try it as soon as it came out. My application was denied, and after some digging, I realized it was likely being handled by TransUnion. I requested a report from them and got back a reply addressed to, you guessed it, Alita Clark. After months of working to fix the errors in my reports, they had crept back in — and someone else’s mistake was keeping me from getting Apple’s shiny new credit card.
My sister and I both filed complaints with the CFPB, and for a while, the situation seemed to be fixed. I finally got my Apple Card and was even able to access my credit report through online channels again. But it was only a matter of time before entropy slipped back in. I was rejected for a car loan late last year, and today I’m back to not being able to access my TransUnion or Equifax reports online. If I ever want to get a mortgage, I’ll likely have to get a lawyer involved.
For her part, my sister says she feels like “a ghost in the shell.” One credit bureau replied to her correction request (which included her social security card and driver’s license) with a letter saying they couldn’t fix it — which was addressed to Mitchell. “Clearly, they didn’t even look so why should I try again,” she told me. As for the reports she was able to get back after they had supposedly been corrected, they were “a bizarre Frankenstein” of our credit histories: alongside some of her real accounts, there are credit pulls from car insurance agencies I looked into, and even one from my local hospital — neither of which have shown up on any of my reports.
Errors Mitchell found when requesting his credit reports (NR = no report requested that year):
- Alita listed as main identity
- Alita’s SSN listed with Mitchell’s information
- Nothing visibly wrong, no SSN shown
- Mitchell listed as “Formerly known as Alita”
- Alita listed as an alias
- Alita listed as main identity
- Currently awaiting report in mail
- Alita’s SSN listed with Mitchell’s information
- Currently awaiting report in mail
You rarely hear about these issues, but they’re surprisingly common — and not just for twins. People can be declared dead by credit agencies, get stuck in limbo after a name change, or just slip through the cracks. In 2012, the FTC asked 1,001 consumers to request credit reports from the big three agencies. Roughly 26 percent of the study participants found incorrect information on at least one of their reports. In a 2015 follow-up study, the FTC asked 121 consumers to examine unresolved disputes, and almost 70 percent of them believed that their errors still hadn’t been corrected. There have been congressional hearings, lawsuits from regulators, and decades of political pressure, but the system simply refuses to shape up.
There has been work done to make the credit reporting process more visible to the general public — in 2014, the CFPB called on credit card companies to start providing their customers with free access to their credit scores, and now many do, with some even letting non-cardholders access their score for free. This lets consumers keep an eye on their credit score, which could give them a warning that something’s gone wrong if it seems lower than it should be or if there’s a sudden change.
But that’s really only a first step. If you notice a discrepancy on your credit report, it could be just the start of a long, drawn-out fight with one or more credit reporting agencies. Even when victims sue agencies and win, the damages aren’t great enough to incentivize better behavior. Hendricks tells me that even when cases end up with large punitive damages against the credit reporting agencies, they tend to get reduced. “[T]hey don’t achieve their purpose of punishing the company and more importantly deterring from continuing the same conduct,” he says. “So basically, they just haven’t been spanked hard enough to change.”
Alita and I both actually have credit cards in our names; I just got one as part of writing this article. While my main bank wouldn’t give me an auto loan, I was able to get one through the dealership’s bank, a local credit union. I was also able to pass the credit check that my new apartment complex pulled, somehow.
But we’re both lucky, in many ways. First, neither of us is causing trouble for the other. My sister’s even more financially responsible than I am, and she’s never been in any sort of legal trouble. But if she did default on a loan, would that show up on any of my reports? If I went to jail, would that prevent her from getting a job at some point? Quite honestly, we don’t know. The answer may actually be “it depends on which credit reporting agency you ask.”
But there are people for whom this kind of issue could be devastating. It’s not hard to imagine a situation where someone falls on hard times and is looking for a lifeline in the form of credit, and then discovers that they’re ineligible, due to some bureaucratic snafu. As the system currently stands, we have three companies that have a vast amount of control and little meaningful oversight. Until some sort of regulatory power steps in, the tangle of the system may only get worse.
Amazon is refreshing a handful of products in its Echo line: the Echo Show 8 and Echo Show 5, plus it’s adding a Kids Edition of the Echo Show 5. The big new feature on both models is the camera, but the upgrade is more impressive on the bigger Echo Show 8. It now has the same 13-megapixel sensor that you’ll find on the Echo Show 10. Instead of moving the screen around to point at you as the 10 does, the Echo Show 8 provides a wider, 110-degree field of view. Within that range, it does the pan and zoom trick to keep subjects centered in the frame.
To power that trick and some other new software features, Amazon says there’s a new “octa-core” processor inside the Echo Show 8. Otherwise, it’s the same Echo Show 8 that we reviewed in 2019, with dual speakers and a choice of either white or charcoal gray. It still sells at the same price, $129.99.
The other software tricks include using the camera to detect when a human has walked into the room and then plugging that information into routines (like turning on the lights). Amazon emphasizes that this is an opt-in-only feature, and it even requires users to manually punch in a code during setup to ensure they really mean to turn it on. It also does its human shape detection locally.
Amazon will also let all Echo Show 8 and 5 devices turn on Alexa’s security mode, so you can remotely view the cameras from your phone. Finally, the Echo Show 8 is getting new AR effects for Amazon’s own video chat service, including “reactions” like filling the screen with hearts or setting custom virtual backgrounds.
As for the smaller (and more popular) Echo Show 5, the upgrades are less impressive. The camera is doubling in resolution, from one megapixel to two. It won’t have the horsepower to do the follow mode on the camera. However, the Echo Show 5 is getting a permanent price drop; it’s now $84.99. It comes in the same charcoal and white colors but adds a new blue option.
If you want to spend $10 more, you can get a Kids Edition of the Echo Show 5 with a wild print on the rear fabric. That extra $10 also includes a year of Amazon Kids Plus services and a two-year warranty against whatever damage your child can inflict on the thing.
All three versions of the Echo Show should be available for order immediately, but shipping could take a few weeks — even Amazon is not fully immune to chip shortages, it seems. As for fans of the Echo Spot orb, it’s not seeing any updates today and is in all likelihood not going to make a comeback — Amazon tells us that most customers just opted for the Echo Show 5 instead.
We’re in a relatively quiet period for Switch releases while we wait for Skyward Sword’s remaster to release in July. Thankfully, today marks the arrival of a new $10 calculator app on Nintendo’s console, which should stop your machine from gathering too much dust over the coming months. Hell yeah. Math.
The app, which was spotted by Eurogamer, is literally just called “Calculator” and is being published by Sabec. It’s single-player, which unfortunately rules out any team-based calculating, and it works in TV and handheld modes, according to its product page. We’d be remiss if we didn’t point out that the app bears a striking resemblance to the iPhone’s old calculator app, but being charitable, it’s possible that the app is only guilty of drawing inspiration from Dieter Rams’ classic design.
Unlike Sony’s and Microsoft’s consoles, and even Nintendo’s previous machines, the Switch doesn’t have much in the way of non-gaming apps. It doesn’t have Netflix, Spotify, or those other pieces of software that we’ve come to expect will appear on basically any piece of electronics with a screen. But now, finally, it has a calculator app. Thank god.
Recently, I’ve reviewed a large number of Chromebooks aimed at students. Students are a target market for many manufacturers, due both to the dominance of Google Classroom across many different grade levels and to Chromebooks’ affordable price tags (compared to similar Windows and macOS machines).
But adults and professionals like Chromebooks, too. Some may be power users running Linux applications, some may make heavy use of Google Workspace in the office, and some may just like Chrome OS. That’s who the ThinkPad C13 Yoga Chromebook is for: it’s a Chromebook for grown-ups.
That means it breaks some stereotypical “Chromebook” conventions. Mainly, it’s not cheap: it’s not too far from the MacBook Air in price. Lenovo isn’t the first company to try this shtick: Samsung and Google, for example, have both targeted this market with $999 Chrome OS machines in the past, and there are Dell Latitude Chromebooks floating around that are even more expensive.
But the C13 Yoga is my favorite attempt at a premium, convertible Chromebook that I’ve seen to date. It has the ThinkPad features that have made Lenovo so dominant in the business space for so many years: the lightweight and sturdy build, the excellent keyboard, the solid specs, the business-focused privacy features, and more. It’s not perfect, but it’s practical. And ultimately, it works.
Put this Chromebook next to other members of the ThinkPad line, and you could fool me into thinking it was another premium Windows machine. ThinkPads are known for their sturdy builds, and this one is no exception. The chassis is aluminum all around. There’s no flex in the keyboard or the screen — and I can’t remember the last time I said that about a Chromebook. The 360-degree hinge is sturdy, and there’s no screen wobble at all. The C13 achieves all this without getting too clunky: it’s 3.3 pounds and 0.61 inches thick. Lenovo says it’s been tested against 12 “military-grade” certification methods.
The display on my review unit is a 300-nit 1920 x 1080 IPS panel. The C13 is also one of very few 13-inch Chromebooks that offers a 4K OLED display option — that one gets up to 400 nits. Most people shouldn’t need that one, as the FHD screen is good. It delivers nice colors, good contrast, and impressive details. It does have the cramped 16:9 aspect ratio, something I’ve been glad to see other ThinkPads shifting away from this year.
Elsewhere, you’ll see a number of other signature ThinkPad flourishes. There’s a very comfortable backlit keyboard, including the signature red Trackpoint in the center. (It does come with a standard Chromebook layout, rather than the usual ThinkPad layout, though the inverted-T arrow keys remain.) ThinkPad fans will also recognize the discrete clickers on the top of the touchpad, as well as the match-on-chip fingerprint sensor on the right side of the keyboard deck and the tiny webcam shutter.
There are some unique tidbits as well. There’s a Google H1 security chip inside, which works like the TPM chips that you’ll often find in Windows business laptops. There’s an optional camera on the keyboard deck (in addition to the one on the top bezel) which you can use to snap forward-facing photos if you’re using the C13 in tablet or tent mode. And there are two stereo speakers on the bottom of the device; they deliver decent separation, though the audio itself is tinny and not great.
But the really exciting thing about the C13 is that it’s the first Chromebook to use AMD’s Ryzen Mobile 3000 C-series processors. AMD introduced this “C-series” last fall as a line specifically designed for Chromebooks. That said, they’re mostly rebrands of older AMD chips — the Ryzen 5 3500C that’s in my C13 model is basically a renamed Ryzen 5 3500U from the regular 3000 Mobile series. This is two generations old now (the 5000 Mobile series came out earlier this year), but it’s still a solid processor for this kind of computer.
The base C13 starts at $909 for 4GB of RAM, 32GB of storage, and an Athlon Gold 3150C processor. That’s a terrible deal at full price, but Lenovo’s pricing is often weird and randomly discounted, and this configuration is currently listed at a more reasonable $590.85. The model I have is listed at $1,247 (but currently available for $810.55) — it comes with the Ryzen 5 3500C, 8GB of RAM, and 256GB of storage. That’s still a bit high for those specs, but it’s a more reasonable value. I appreciate that the storage is a PCIe SSD (rather than the slow eMMC storage that companies sometimes try to sneak into pricey Chromebooks).
This is definitely the best-performing Chromebook I’ve used in quite some time. I used the device as my primary driver for a few days, running well over a dozen Chrome tabs and Android apps, and I almost never felt heat or heard the fans unless I put my ear to the keyboard deck. Nothing slowed the system down, either. I was able to edit a batch of photos in Adobe Lightroom with a dump of tabs and apps open and both a Zoom call and Spotify running in the background, and the experience was just fine. Speaking as someone who’s tested a number of sluggish budget Chromebooks recently, it’s really refreshing to see Chrome OS running this smoothly.
AMD has claimed that its integrated Vega graphics are the best graphics you can get in a Chromebook, and while I can only verify that claim anecdotally, I had a significantly better gaming experience on the C13 than I ever have with an Intel Chromebook. Rest in Pieces, one of my favorite mobile games, is usually a playable-but-stuttery experience on Chromebooks. But it was quite smooth on the C13, without a stutter in sight. Photo editing, in both Google Photos and Adobe Lightroom, was also no problem on this machine.
I ended up running a couple of benchmarks to see how this system stacks up to the competition. On AndroBench, which measures the speed of the storage, the C13 Yoga Chromebook was well ahead of the pack on the majority of tasks, and head and shoulders above the Samsung Galaxy Chromebook 2. On Geekbench 5, the C13 scored an 890 on single-core and a 2963 on multi-core. While those scores aren’t as good as those we’ve seen from our top Chromebook pick, Acer’s Chromebook Spin 713 (and don’t compare to the likes of the MacBook Air, of course), they’re still close to the top of the Chrome OS pack, beating scores we’ve seen from both Samsung Galaxy Chromebooks and the Pixelbook Go.
I also found the built-in stylus to be smooth and responsive on this touchscreen, though it was a bit of a pain to remove from its garage.
One disappointment undercuts all of this, and that’s battery life. Though the C13 has a reasonably sized 51Wh battery, I only averaged six hours and two minutes of continuous use with the screen at 50 percent brightness. I ran trials using all kinds of Android apps, and trials using just Chrome, without seeing a massive difference. I’ve seen significantly more than that (with the same workload) from all kinds of Chromebooks, not to mention Windows and macOS laptops. This certainly makes me hesitant to recommend that anyone get the 4K screen option — I can’t imagine that most people will get acceptable battery life on those configurations if this is what I’m getting with the FHD screen.
I’d be more willing to let this battery life slide on a budget device (though plenty of budget Chromebooks last much longer than this). But on a $1,247 device, I’m disappointed not to see an all-day life span. Sure, the processor is powerful, and there’s often a trade-off between performance and efficiency. But all kinds of Windows laptops at this price point leave this battery life in the dust.
One other concern: I could never actually get the fingerprint sensor to read my fingerprint. Lenovo says it hasn’t seen this problem before, so it may have been an issue with my unit.
Android apps were hit-or-miss on Chromebooks when I first started reviewing them, but many of them work well on the C13. Messenger used to crash my device every time it got a notification, for example, but it now works just fine. That said, most of the services I use on a daily basis — Twitter, Messenger, Gmail, Reddit, etc. — are equivalent or slightly better experiences in Chrome, and some work-related Android apps (like Slack and Google Docs) are still bad on Chromebooks. So I generally don’t use Android apps too much except for things like Podcast Addict, which don’t really have browser equivalents — but I’m happy to see the ecosystem improving.
The C13 also supports Chrome OS’s tablet mode, which has gotten better, especially with the stylus. It supports various handy Android-esque gestures (swipe up to go home, for example). The device sometimes took a second or so to rearrange and resize all my Chrome windows after I switched it back to clamshell mode, which isn’t the worst thing in the world.
The ThinkPad line is, in many ways, the opposite of what many people consider a Chromebook to be. ThinkPads are traditionally expensive, and they’re very well made. But times are changing (or at least, companies like Lenovo are trying to make them change). Why shouldn’t Chrome OS fans get a ThinkPad option, too?
The C13 Yoga isn’t a perfect machine. The 16:9 screen makes me sad, and the battery life is a big miss. It’s a bit expensive for what it offers, as is often the case with laptops targeting business users.
With that said, the C13 is also the closest thing to a MacBook that I’ve seen yet in the Chromebook space. It has a solid, sturdy build that looks and feels premium. It has a strong processor, an excellent keyboard, and a solid screen, and it comes from a highly respected brand with a devoted base of fans.
So while the C13 may not be the right choice for most people — there are more affordable Chromebooks with better battery life that will be a better buy for most consumers — it’s objectively a neat device that will probably make a certain sect of Chrome OS users happy. If you’re a Chrome OS business user who’s been jealous of the premium chassis that Windows and Mac users get, here’s a ThinkPad for you.
Snap will suspend two anonymous messaging integrations from Snapchat after a lawsuit sought to hold them responsible for a teenager’s death, the Los Angeles Times reports. The lawsuit was filed on Monday by Kristin Bride, the mother of a teenager who died by suicide in June after being bullied on the two apps.
“In light of the serious allegations raised by the lawsuit, and out of an abundance of caution for the safety of the Snapchat community, we are suspending both Yolo and LMK’s Snap Kit integrations while we investigate these claims,” a spokesperson for Snap said in a statement. Representatives for the two apps, Yolo and LMK, did not immediately respond to The Verge’s request for comment.
Yolo and LMK are developed by third-party developers, and they integrate with Snapchat via its Snap Kit platform. LMK lets users create polls and Q&As for their Snapchat friends to answer, while Yolo is focused on Q&As. Both services let users send messages anonymously, which the suit alleges facilitates cyberbullying to such a degree that the apps should be considered dangerous.
Last year, when Carson Bride was found dead by his family, his phone history showed that he’d searched how to “Reveal YOLO Username Online” that same day. The lawsuit alleges that over a period of several months he had been receiving anonymous bullying messages, which made sexual comments and taunted him over incidents at school.
As of this writing, Yolo, which the suit says is owned by Yolo Technologies, appears to no longer be available on either the Apple App Store or Google Play Store. LMK, which is developed by LightSpace, is still available for download on both mobile app stores, but attempting to share content to Snapchat generates an error message.
Both apps make various promises about protection against bullying on their platforms, the LA Times notes. Yolo reportedly warns users during setup that it has “no tolerance for objectionable content or abusive users,” while an FAQ from LMK says it goes “to great lengths to protect our community” with a combination of automated and human moderation. The plaintiffs argue that the two apps violate consumer protection laws by failing to enforce their own terms of service.
Worries that Yolo could be used for bullying have been around for years. As early as 2019, TechCrunch wrote that the app’s model could be open to “teen misuse.” Mashable noted that a previous anonymous messaging app, Sarahah, was eventually kicked off app stores over its bullying problem.
Section 230 of the 1996 Communications Decency Act generally protects social media companies from the actions of their users. But Section 230 typically applies to posts rather than app functionality, and US courts have recently shown a willingness to hold platforms liable when a specific feature proves dangerous. Last week, an appeals court ruled that Snap can be sued over a speed filter, following allegations that it encouraged reckless driving. The claim was that the design of the product encouraged dangerous behavior, with users believing that hitting speeds of 100 miles per hour would unlock an achievement.
The Bride family is seeking damages on behalf of all 92 million Snapchat users, and for the two apps to be banned from the market until they can prove they have effective safeguards in place. The lawsuit specifically says it doesn’t want to punish the users who sent the bullying messages, only the companies that facilitated them; namely Snap and the developers of Yolo and LMK:
The claims in this action are not about third-party users’ communications; hence, this action does not focus on the users’ communications themselves nor does it seek to punish the senders of the bullying and harassing messages.
Rather, the claims here are about how the anonymous messaging apps designed and distributed products and services that are inherently dangerous, unsafe, and useless. For decades, anonymous messaging apps have been known to cause severe and fatal harm to teenagers; hence, the harms caused by Defendants’ apps were foreseeable.
Volkswagen will start testing its new autonomous vehicles in Germany this summer, the company announced Wednesday. The German automaker’s electric ID Buzz vans will use hardware and software developed by Argo AI, a Pittsburgh-based startup that is backed by Ford and VW. The aim is to launch a commercial delivery and micro-transit service in Germany by 2025.
Executives from VW and Argo convened a press conference this week to provide an update on their partnership, which was first announced in 2019 as an extension of VW’s “global alliance” with Ford. And while much of what they discussed was already known, it did provide a closer look at the timeline for launching a revenue-generating service using VW’s vehicles and Argo’s autonomous technology.
Argo, which has been testing its vehicles in the US with Ford for the last few years, said it would be launching the fifth generation of its automated driving technology with the VW ID Buzz, which is the electric version of the automaker’s iconic microbus. Bryan Salesky, the startup’s founder and CEO, praised the collaborative nature of Argo and Volkswagen’s partnership.
“We’re building our technology and partnering with Volkswagen in a way that really sets us apart from what others are doing,” Salesky said. “And we think it really puts us in a position to deliver a safe, smart, and scalable product to deliver on the promise of autonomous driving.”
That work has already started. Earlier this year, Argo and VW developed a prototype minivan that combines the German company’s MEB electric vehicle platform, the body of a VW T6 Transporter, and Argo’s AV technology, including LIDAR sensors, radar, and cameras. Argo’s software enables the vehicle to “see” its environment, plan its next steps, and predict the movements of other vehicles and pedestrians on the road. This, in combination with Argo’s sensor suite, allows for automated driving at low and high speeds, Salesky said.
VW said that it plans to put the vans in service as a ride-sharing fleet under its subsidiary Moia. Since 2017, Moia has been operating a fleet of electric vehicles as part of its “ride-pooling” service in Hamburg, where it has served 3 million customers to date. Those customers have provided a treasure trove of feedback that Moia CEO Robert Henrich says will prove useful as the company shifts to a completely autonomous fleet by 2025.
“We have learned in recent years that both customers and cities have really high and very specific expectations towards future autonomous ride-pooling systems,” Henrich said. “Customers, on the one hand side, expect ride-pooling to be as easy, convenient, and reliable as riding their own car… But cities, on the other hand, expect ride pooling to help alleviate traffic congestion.”
The AV industry has been consolidating rapidly over the past year, with many companies being acquired or merging with other companies. It’s a mad dash to keep businesses afloat in the face of lengthening timelines and steep operational costs with little expectation for revenue generation in the near term. Robotaxis, in particular, are seen as being further out than most companies are predicting. VW and Argo say they remain bullish about their ability to hit the target date.
“There is a long way to go still until this high tech becomes an enormous growth market,” said Christian Senger, VW’s senior VP for commercial vehicles.
Judges from the European Union’s second-highest court have rejected a €250 million ($300 million) tax bill lodged against Amazon in 2017 as part of the bloc’s ongoing fight against US tech giants.
The case was one of a number spearheaded by Margrethe Vestager, the European Commissioner for Competition, in which sweetheart tax deals given to powerful corporations have been framed as a form of illegal state subsidy. The most notable of these was a 2016 case in which Apple was ordered to pay Ireland €13 billion ($14.9 billion) in back taxes. This decision was annulled in 2020 by the same court involved in today’s ruling.
The Amazon case can be traced back to 2006, when the e-commerce giant established a labyrinthine tax structure in Europe that allowed it to funnel revenue from all EU sales through a subsidiary based in Luxembourg. Internally, Amazon referred to this as Project Goldcrest, named after Luxembourg’s national bird.
In 2017, the European Commission ruled that this structure was illegal and had allowed Amazon to avoid around €250 million in taxes. “Luxembourg gave illegal tax benefits to Amazon,” said Vestager at the time. “As a result, almost three quarters of Amazon’s profits were not taxed. In other words, Amazon was allowed to pay four times less tax than other local companies subject to the same national tax rules.”
But in a ruling this morning announced by the General Court of the European Union, judges found that the Commission “did not prove to the requisite legal standard that there was an undue reduction to the tax burden” of Amazon’s Luxembourg subsidiary. The ruling is a significant win for Amazon, and a blow for EU politicians hoping to rein in US tech giants.
The US has agreed not to put Xiaomi on a blacklist blocking investment in the Chinese tech company, undoing a move made in the last week of the Trump administration. Xiaomi sued the US government over the designation, but just filed a joint status report with the Department of Defense saying the two parties “have agreed upon a path forward that would resolve this litigation without the need for contested briefing.”
The US didn’t appeal the preliminary injunction Xiaomi won against the designation earlier this year. Xiaomi told The Verge in January that the company “is not owned, controlled or affiliated with the Chinese military, and is not a ‘Communist Chinese Military Company.’”
Now it looks like the matter will be settled on better terms. Xiaomi and the DoD have “agreed that a final order vacating the January 14, 2021 designation of Xiaomi Corporation as a CCMC … would be appropriate,” according to the court filing. The two parties now plan to negotiate on an order vacating agency action, with a joint proposal expected before May 20th.
The US Patent Office issued utility patent number 11 million today, granting the milestone number to a patent entitled “repositioning wires and methods for repositioning prosthetic heart valve devices within a heart chamber and related systems, devices and methods.”
Even without understanding exactly what that means, it just screams progress, doesn’t it? Prosthetic heart valves? Surgery? This truly is the future that the patent system enables.
There have, however, been accusations that the patent office cherry-picked which invention would get the most notable number in years (patent 10 million was awarded back in 2018), aiming to give it to something exciting rather than something bland like, say, a soybean.
Could it really be true? To see if that was the case, I looked at the patents that were granted before and after it, to see if they really were as boring as Twitter alleged. And I found that they absolutely were. The prosthetic heart valve-related patent is, in fact, c-c-c-combo-breaking what would otherwise be a string of six soybean-related patents.
That’s not all, though. I looked back at patents 10,999,990 through 10,999,999, and before the soy starts, there’s a string of patents about corn, sorghum, and cucumbers. Yeah, it’s not a lot sexier. Going the other direction, patent 11,000,005 is for an edible (non-soy) bean called COWBOY, and 11,000,006 is about a tomato variant. Then things just start getting weird, with pet doors and farm equipment.
Whether the patent office purposefully stole soy’s thunder probably isn’t something we’ll ever know for sure, but to me the evidence is pretty compelling. The Patent Office sure made a big deal about 11 million on Twitter, tweeting about it more than a few times. Surely it must’ve known it wouldn’t have been as exciting if the celebrated patent had been one of six soybeans.
I’ll be keeping an eye out for any funny business around patent number 11,111,111 as this is, obviously, a very serious issue.
Ever get a text message informing you that you’ve won a prize — from Amazon itself? I certainly have, and I’ve even known a person or two who fell for those scams. Now, Amazon is attempting to hunt those scammers down, with a new lawsuit aimed at unmasking 50 unknown defendants in federal court.
Here’s the text message I received the other day from one such scammer:
And here’s where it took me when I clicked:
According to Amazon’s legal complaint, which you can read in full below, the scam uses Amazon’s logos, web design, and meaningless “surveys” to trick people into buying products (ones that aren’t actually from Amazon), seemingly for cheap. The scammers aren’t necessarily the ones selling those products, either — but because they’re acting as referrers, they get a finder’s fee in the form of affiliate marketing commissions. The Better Business Bureau says it received 771 reports of scams impersonating Amazon in 2020, second only to those annoying calls claiming to be the Social Security Administration.
The tricky part is finding the scammers, because Amazon doesn’t actually know who they are, just which domain names they used to host the scam. But by filing a lawsuit against these 50 John Doe defendants, Amazon may be able to get a doe subpoena to unmask their identities. The company tells us that’s worked in the past; in 2018, the company filed a John Doe lawsuit against a very similar scam and was able to track down at least four defendants. Each case ended with a permanent injunction, according to court records, and Amazon says it’s won at least $1.5 million in settlements so far.
The lawsuit also gives Amazon time to find them before the statute of limitations runs out, and it puts the scammers on notice, just in case they might like to stop before they get hauled into court.
Some Destiny 2 players have been able to play with people on other platforms after developer Bungie inadvertently switched the feature on. Bungie previously said that crossplay would be coming in fall 2021 for every platform that the game is available on, including PC, PS4/5, Xbox One, Xbox Series X/S, and Google Stadia.
The Verge’s Tom Warren spotted that the crossplay feature went live and was able to load up a game with players on both PC and Stadia.
Bungie community manager Cozmo, however, confirmed that crossplay was not supposed to have launched yet. “We are seeing reports that some players are able to get a sneak peek at Crossplay,” he wrote in a tweet. “This isn’t meant to be live yet and is not representative of the full experience. We will be implementing a fix to remove public access later this week, but in the meantime feel free to partake.”
Bungie just today launched Destiny 2’s latest seasonal update, Season of the Splicer, which brings the usual array of new content and activities. It’s unfortunate that crossplay appears to have been an unintended inclusion — particularly if you’re a Stadia player in need of more people to join your games — but for now it sounds like you’ll be able to check it out for a little longer.
The FCC has approved $7.17 billion in funding to help students, school staff, and library patrons access hotspots and connected devices to use at home. The Emergency Connectivity Fund Program will allow schools and libraries to purchase equipment to be used off-site — and to get reimbursement for equipment already purchased to address remote learning needs during the pandemic.
The new fund will use processes already in use by the E-Rate program, which currently helps schools and libraries pay for broadband internet. Qualifying schools and libraries will be able to purchase hotspots, routers, tablets, and computers, among other devices necessary for remote learning (though smartphones don’t qualify). Students and patrons can take them home and use them, rather than huddle outside of a Taco Bell in order to finish their homework.
This well-known “homework gap” that has seen millions of kids struggle to participate in remote learning is an issue that FCC Chairwoman Jessica Rosenworcel is keen to address. Since the very beginning of the pandemic, she has called on the FCC to help schools and libraries get hold of equipment sorely needed in many homes across the country — the FCC quotes a study from last spring that found about nine million public school students live in homes lacking both adequate internet access and a suitable device for remote learning. With this new funding and the Emergency Broadband Benefit Program already underway, there may be a bit of relief in sight for these households.
Last week, the judge in Epic v. Apple asked whether Epic really had an antitrust case against Apple, or whether it just wanted to help kids make impulse purchases. Judge Yvonne Gonzalez Rogers was talking about the importance of where and how people pay for their apps, and today she continued that line of questioning to the point of suggesting a kind of App Store policy change that Epic never originally put on the table.
Epic sued Apple for banning Fortnite from iOS over a direct payment system for V-Bucks, Fortnite’s in-game currency. Epic called that unfair and monopolistic. But Apple argued that it lets developers sell in-app purchases through its Safari browser, even at a discounted price — so there’s no lockout. And while Epic itself has focused on explaining why web apps aren’t a good substitute for native ones, its expert witness David Evans brought up another major issue: anti-steering rules.
Anti-steering rules (in this context) refer to rules that ban developers from pointing users outside of Apple’s ecosystem. iOS developers can’t add links or references telling people to get a better deal on their website, or send emails to accounts created through Apple. Android has these rules too, but it delayed a serious crackdown on them until this fall — and since you can install third-party stores and sideload apps on Android, developers in Google’s ecosystem have more options in general.
Evans, an economist, was originally trying to explain in-app purchases by comparing Apple to a ride-hailing app like Uber, comparing an app developer to an Uber driver who had struck up a good relationship with a customer. The customer wanted to start directly hiring the driver, but the ride-hailing company (representing Apple) demanded that the customer keep paying through its app.
Judge Rogers didn’t appear convinced. Buying V-Bucks through a browser, she noted, seemed a lot like a passenger directly paying a driver. “There’s nothing about that distribution process that impacted differently given your Uber example.”
Evans basically responded that in this analogy, cab drivers can’t even do the equivalent of giving passengers their phone numbers. “Epic is not able to message the iOS app user and tell them ‘You can go to the web and get this more cheaply.’ Or ‘I really encourage you to go to the web and get V-Bucks there,’” Evans objected. The problem, he said, was the combination of requiring Epic to use Apple payment processing, plus a “whole set of barriers” that make it harder to tell users they have an alternative.
These anti-steering provisions have come up in the trial before — yoga app maker Yoga Buddhi complained about them last week. But this time, Rogers offered an obvious follow-up question. If there was no anti-steering provision, she asked, would Epic still have a problem with Apple’s system? “The customer could choose whether they wanted to stay and make the purchase on the app or do it some other way, right?”
Evans admitted that nixing the anti-steering provisions “wouldn’t eliminate the market power that Apple has here, but it would certainly diminish it.” He said it would be more helpful for some apps than others — it’s pretty good for subscription-based companies that have a separate website, for instance, and less useful for mobile-only games that rely on a stream of microtransactions. But he acknowledged he hadn’t conducted a specific study of the topic, so he wasn’t sure exactly how big the issue would still be.
Later in the day, economist Susan Athey raised a different issue with App Store exclusivity. The App Store lets users sign up for subscriptions, but if they switch to an Android phone, they have to either cancel their subscription or keep managing it through Apple. Athey was using this to explain why a third-party app store would be useful, should Apple ever allow one to exist — if you could access the same purchase from both big phone platforms, the same way you can get your old iOS apps on a new iPhone, switching devices could become much easier.
But Rogers suggested again that if developers could just tell people to sign up through the web, “then there wouldn’t really be the same kind of need for the kind of cross-platform app store that you’re talking about.” After all, services like Netflix already direct people to sign up through their websites — Apple and Google just really don’t like it, and they try to discourage the practice without an actual ban. Similar to Evans, Athey conceded that there’d be a “big benefit” in letting app makers “alert people to the most efficient way to pay.”
Athey argued that “consumers do get klutzy and disconnected and sensitive to delays when trying to complete that type of activity,” and telling people to go use a web browser doesn’t solve that problem. But Rogers could easily decide that inconvenience and enforced ignorance are separate issues, and that only the latter is a serious antitrust concern.
Getting rid of anti-steering provisions would be a comparatively small win for Epic, which wants to put full-fledged third-party App Stores on iOS. But it’s a smartphone ecosystem feature that’s often overshadowed by bigger antitrust complaints — and Epic v. Apple is putting it under the spotlight.
CryptoPunks were one of the earliest NFT projects, and they’ve become increasingly valuable as collector’s items. The project, created by Larva Labs in 2017, offered 10,000 small pixel-art portraits of people, zombies, aliens, and apes. Each one was algorithmically generated and features different attributes, like their hairstyle, glasses, or hat. Some traits are rarer than others, and those tend to make for more valuable CryptoPunks.
This sale of nine CryptoPunks comes from Larva Labs itself. The group initially kept 1,000 of the NFTs for themselves and gave away the rest. The bundle of nine includes one CryptoPunk with a particularly rare trait: CryptoPunk 635, the one with the blue face and sunglasses, is one of just nine “alien” punks in the entire series. Another of the nine sold, CryptoPunk 2, the one with wild black hair that looks a bit like a heart, has the distinction of being number two out of a 10,000-work series.
CryptoPunks have soared in popularity since NFTs started blowing up in February. Two alien punks sold in March for more than $7.5 million each. Another seven have sold for more than $1 million, all just in the past few months.
T-Mobile promised its $26 billion merger with Sprint would add new jobs to the economy literally every single day, but a report from The Wall Street Journal confirms the newly combined company actually employs fewer people now than it did before the merger. Here’s how then-CEO John Legere put it while the deal was under review:
So, let me be really clear on this increasingly important topic. This merger is all about creating new, high-quality, high-paying jobs, and the New T-Mobile will be jobs-positive from Day One and every day thereafter. That’s not just a promise. That’s not just a commitment. It’s a fact.
“Jobs-positive” would not be how I’d describe T-Mobile’s employment practices. The company employed 5,000 fewer people by the end of 2020 than it did before the merger, the Journal writes. And before you blame the COVID-19 pandemic, you should know that wireless executives who spoke to the Journal said that shrinking the number of jobs was always part of the plan — the pandemic just sped up planned job losses on the retail side of T-Mobile’s business. In fact, T-Mobile laid off hundreds of Sprint’s inside sales team just a few months after the merger was completed.
In addition to the “jobs-positive from day one and every day thereafter” claim, which now appears to have been a blatant lie, T-Mobile also specifically promised it would add 11,000 jobs by 2024. Instead, it appears to be moving in the opposite direction.
We were warned this would happen. Wall Street analysts and labor unions both predicted anywhere between 24,000 and 30,000 jobs could be lost if T-Mobile and Sprint got what they wanted. Telecom industry watcher Karl Bode wrote multiple articles for The Verge highlighting how meaningless the merger promises looked, comparing them to Sprint’s own history of post-merger layoffs.
For some context, when Sprint was given the go-ahead to merge with Nextel in 2005, it made its own claims about how great its merger would be for the economy. Bode writes:
Government filings had promised the FCC that the deal would “generate economic growth and jobs in the United States.” Then-Sprint CEO Gary Forsee told media outlets in 2005 that employees “shouldn’t expect to see a headline that there’s thousands of jobs that are going to be cut on the first of November or any time along the way.” By the end, more than 8,000 employees would lose their jobs.
Sounds familiar, doesn’t it?
T-Mobile’s other merger obligations, like helping Dish become a viable fourth carrier option, also seem like they might just be talk. Dish CEO Charlie Ergen recently told the FCC that if T-Mobile moves forward with its plans to shut down its older 3G CDMA network, Dish’s Boost Mobile customers (who rely on Sprint’s network for service) would be greatly harmed. Ergen stresses that for customers who need the less expensive phone plans Boost provides, upgrading to a 4G or 5G-capable phone isn’t a small expense — it could force some people to go without.
As far as jobs are concerned, current T-Mobile CEO Mike Sievert does tell the Journal that the company plans to fill 6,000 open positions as the world recovers from the pandemic. That might bring T-Mobile/Sprint back up to its pre-merger numbers, but it’s now even harder to believe the company will add an additional 11,000 jobs by 2024.
The first commercial-scale offshore wind project in the US just got the green light from the Biden administration. The approval has the potential to dramatically grow the nation’s wind energy sector after years of regulatory limbo for proposed offshore projects.
The Interior Department (DOI) granted the Vineyard Wind project permission to install up to 84 turbines off the coast of Massachusetts. Once completed, the project will be able to generate up to 800 megawatts (MW), enough electricity for 400,000 homes. That’s a dramatic scaling up of existing offshore wind capacity in the US. There are currently only two small developments off the East Coast and those can only generate a combined 42 megawatts of electricity.
The Biden administration has big ambitions for offshore wind. It set a goal of getting 30,000 MW of energy from offshore wind by 2030. That’s part of a bigger plan to tackle climate change by reaching 100 percent clean electricity by 2035.
“A clean energy future is within our grasp in the United States. The approval of this project is an important step toward advancing the Administration’s goals to create good-paying union jobs while combatting climate change and powering our nation,” Interior Secretary Deb Haaland said in a statement.
Vineyard Wind first submitted its construction and operations plan for federal approval in 2017, although plans for the project have been in the works since 2009. Now that the project has been greenlit, it’s expected to be operational by 2023, according to the project’s developers, Avangrid Renewables and Copenhagen Infrastructure Partners.
Vineyard Wind receiving a Record of Decision from the DOI’s Bureau of Ocean Energy Management is likely a good sign for more than a dozen other offshore wind projects awaiting federal approval. It’s good news for adjacent industries, too, including shipbuilding. There’s been a “chicken and egg” situation for the offshore wind industry: it needs specialized vessels to construct new projects, but shipbuilders were hesitant to invest in new builds until major projects received permits to move forward. Vineyard Wind’s approval could be wind in the sails to get both industries really moving.
Wyze, maker of $20 smartwatches and $30 video doorbells, has announced a new pair of wireless earbuds that include active noise cancellation and a Qi wireless charging case for only $60 (via Phandroid). They’re called the Wyze Buds Pro and come in at an incredibly low price when compared to the competition: you could buy four pairs of these for the price of Apple’s AirPods Pro, which have similar features.
For comparison: Anker’s $60 earbuds come without ANC and their case charges over Micro USB, while the Wyze buds’ case supports both Qi wireless charging and USB-C.
Wyze’s announcement video takes a decidedly self-aware bent, but while it does mention the sweat resistance, transparency mode, and wind noise reduction, it doesn’t mention one of the funniest things about the product’s name: it’s called “Pro,” but there aren’t any Wyze Buds non-pros to compare them against (though Wyze does say a pair is coming).
I may poke fun, but, on paper at least, these earbuds are competing way above their price. The SoundCore Liberty Air 2 Pro (what a name) from Anker and Echo Buds second-gen from Amazon are both considered to be budget picks for true-wireless earbuds with ANC, and those are in the $100+ price range. Amazon charges an extra $20 for a wireless charging case for the Echo Buds, something that Wyze includes. To add insult to injury for Amazon, the Wyze Buds Pro also have Alexa built in.
Of course, some of the most important functions of earbuds are how they sound and whether the ANC is actually any good. Given that we haven’t actually heard them yet, these earbuds haven’t proven themselves in either category. But if they end up being good, or even passable, they’ll be a solid deal for those who want all the fancy features without having to pay top dollar.
The Wyze Buds Pro are currently available for preorder, and Wyze says they will start shipping in July.
Instagram is making it easier to address people by their defined pronouns. The company announced today that it’s allowing people to add up to four pronouns to their profile, which they can then choose to display publicly or only to their followers. (Users under 18 will have this setting turned on by default.) Instagram says people can fill out a form to have a pronoun added, if it’s not already available, or just add it to their bio instead. Instagram says this is available in a “few countries,” but doesn’t specify further. We’ve reached out for more information and will update if we hear back.
A couple Verge staffers already have the pronouns setting available to them, suggesting it’s live in the US. You can get a better sense of the feature’s user flow in the screenshots below, courtesy of news writer Jay Peters.
Other platforms also allow their users to add pronouns to their profiles. Dating apps, like OkCupid, have already introduced the feature, as have other apps like Lyft. Interestingly, Facebook allowed users to define their pronouns starting in 2014, although the feature limited people to “he/him, she/her, and they/them.” That still appears to be the case, whereas Instagram will offer more options.
Instacart is shifting to a primarily remote-first workplace, but employees say the company has arbitrarily chosen which teams have to come into the office in a way that hurts junior employees. Workers on the central operations team — which includes logistics and trust and safety — have been told they need to return to the San Francisco office three days a week starting in September and were not given a clear reason why.
“To a lot of employees this policy excluding us from permanent remote work is being interpreted as ‘we trust the majority of the company to be able to work remote permanently, but not these specific employees,’” says one employee who asked to remain anonymous for fear of professional retaliation. “Considering everyone has been remote for over a year, it’s very disappointing.”
Last week, the company announced that 70 percent of the workforce would be remote. “We asked our employees what they wanted the future of work at Instacart to look like, their response? Make it flexible,” a newly updated careers page says. “We know there’s no one-size-fits-all approach for how we do our best work, so we’re introducing a hybrid work environment for when it’s safe for our offices to re-open.”
In response to questions from The Verge, an Instacart spokesperson said: “Central Operations employees often work with sensitive proprietary information and data that is managed on-site in Instacart’s offices.”
The team, estimated to be around 100 people, includes many entry-level employees and workers who are new to the tech industry. Employees say the power imbalance makes it difficult to push back on the remote work policy. “A lot of the roles are easily replaceable,” says another worker who asked not to be named. “They can happily find someone else to fill that role if you’re not okay with the policy.”
On the anonymous chat app Blind, one user wrote that managers on the central operations team were told to “silence the issue, when reported, rather than find a solution.” While anyone with an Instacart email address can post on Blind, the comment scared employees who worried they’d be fired if they questioned the mandate. Two commenters say they plan to quit if the policy doesn’t change.
An Instacart spokesperson said it has never told managers to silence the remote work issue, adding: “We always encourage employee feedback on these policies and will continue to create forums for open discussion to ensure every Instacart employee feels engaged, productive, and successful.”
In an internal email obtained by The Verge, an Instacart director said in-person work was a “foundational element to professional growth, team cohesion, cross-collaboration, and sustained performance over time.” But the requirement does not apply to most other teams at the company, nor does it extend to senior managers in the central operations organization. Those employees are able to work remotely, popping into the office a “percentage of time” throughout the month, according to the internal note.
The tension between Instacart employees and management highlights the quandary that many tech companies will likely face as they begin to reopen their offices. While organizations like Twitter and Coinbase have committed to going fully remote, others are attempting a hybrid approach that will doubtless leave some workers frustrated.
That’s partly because many tech workers have moved outside of San Francisco. Natalie Holmes, a research fellow at the California Policy Lab, told the Los Angeles Times that the city was experiencing “a unique and dramatic exodus” amid the coronavirus pandemic.
Instacart employees on the central operations team were already upset about having to return to in-person work three days a week when the San Francisco Business Times published an article last week announcing most of the company would be remote. “The article put it in perspective that we were basically the only people who would be required in the office,” the anonymous employee says. “That threw the reason of ‘cross-collaboration’ out the window, since the teams we work with won’t be there.”
The past year, a lot of people were laid off or otherwise unable to pay for basic necessities. So the stimulus package passed by Congress in December included a provision to pay for broadband and other basic tech for those who, because of job loss or other financial difficulties, can’t afford to pay for it on their own. And starting this week, if you qualify, you can take advantage of it.
The Emergency Broadband Benefit Program is being administered by the FCC and offers a temporary discount on monthly broadband bills — up to $50 a month (or $75 if your household is on qualifying Tribal lands). If your income qualifies, you can also get a one-time discount of up to $100 for a computer or a tablet.
It’s not a lot, considering how much tech costs these days, but every little bit helps. And applications for that discount will be available starting tomorrow, May 12th.
You can also apply by mail by sending a completed application to: Emergency Broadband Support Center, P.O. Box 7081, London, KY 40742.
One warning: if you think you may qualify, don’t put off sending in the application. This is a temporary program, and you lose your benefit when the fund runs out of money, or “six months after the Department of Health and Human Services declares an end to the COVID-19 health emergency” — whichever is sooner.
eBay is now allowing NFTs to be sold on its platform, making the digital collectibles available side by side with physical ones. Whether you’re looking for a physical Dogecoin replica or a digital representation of Elon Musk holding Doge, eBay is apparently now the place to get both.
At the moment, eBay wants to make sure that NFTs are listed by trusted sellers, and only in certain categories like trading cards, music, entertainment, and art. The company does say, though, that it hopes to expand its policies and tools in the future to allow more categories after it’s gathered feedback from the community with the current crop of NFTs.
The blog post also mentions future updates to allow “blockchain-driven collectables,” though it doesn’t expand on what that means outside of NFTs.
eBay’s CEO said earlier this month that the company would be open to accepting cryptocurrencies in the future, but at the moment the NFTs being sold on the platform seem to be using its standard payment system linked to a credit card or PayPal account.
The Acer ConceptD 7 Ezel is a computer I will never own. But I really, really wish I could.
Artists, creators, and engineers who are looking for a powerful high-end convertible have all kinds of options on today’s market. But only Acer’s ConceptD line can fold in six different ways. There are not one, but two hinges attached to the display: a traditional clamshell hinge and another one in the middle of the lid that enables the screen to rotate outward. By using the two hinges in tandem, you can put the screen in nearly any position you want. This unique form factor makes the ConceptD 7 Ezel unlike any other laptop on the market.
There are other things that separate the Ezel from something like a MacBook, of course. It also has a sleek look with an attractive finish, a gorgeous 15.6-inch 4K UHD touch display, a built-in Wacom EMR pen, and all the ports you need. The chips on the inside are quite powerful. But you can find similar benefits in many convertibles that are half the price. The people who should shell out thousands of dollars for this device are those who have a need for the combination of its unique form factor and large screen — and the rest of us can be jealous of them from afar.
Before ogling too much over this form factor, you might want to know how much it costs. The $2,499 base model comes with an Intel Core i7-10750H, an Nvidia GeForce RTX 2060, 16GB of RAM, and a 1TB SSD. For $2,999.99, you can bump the graphics up to a GeForce RTX 2070 and 2TB of storage. I was sent the top model, which has a Core i7-10875H, 32GB of RAM, and a GeForce RTX 2080 Super Max-Q, for a whopping $3,999.99. These components are both a generation old — Acer hasn’t refreshed the ConceptD with the latest chips yet — but they still deliver solid performance, as you’ll see later on.
These prices will make the ConceptD 7 Ezel an unrealistic purchase for most people, but there’s a 14-inch ConceptD that’s more affordable if you’re interested in this form factor. For those whose work involves professional design and video editing, CGI, machine learning, and the like, Acer also sells a ConceptD 7 Ezel Pro with an Nvidia Quadro GPU. Those are expensive, and people whose work requires a Quadro likely know who they are.
There are all kinds of ways you could theoretically arrange the ConceptD, but Acer has defined six. There’s Laptop (self-explanatory), Pad (tablet mode), Float (screen facing forward, hanging above the keyboard deck), Stand (screen facing forward, forming a tent shape over the keyboard deck), Share (screen facing upward, parallel to the keyboard deck), and Display (clamshell shape, but with the screen facing away from the keyboard).
I started out using the Ezel in Laptop most of the time, but Float grew on me quickly. It brought the screen much closer to me — it’s pretty far away in Laptop mode, given the size of the keyboard deck. I can see the use cases for the other modes as well: I’d love to use Stand to take notes during a lecture, for example, and Share could be useful for drawing while standing at a desk. The one form I can’t really see myself using is Pad because, at 5.6 pounds, the Ezel is too heavy to practically hold as a tablet unless you’re swole.
The one hiccup I ran into is that the screen is very top-heavy. A few times when I picked the device up, the screen would start to fall forward and I’d have to catch it to keep the lid open. My preferences for Windows tablet mode vs. Windows desktop mode also didn’t quite line up with the device’s. It stayed in desktop mode when in Stand, for example, but I’d prefer it switch to Tablet Mode in that form since the keyboard isn’t accessible.
The fact that these form factors are useful, of course, doesn’t mean that most people need them. Convertibles like the Dell XPS 13 2-in-1 can emulate most of these positions as well (Float and Stand are the really unique ones). The Ezel is really meant for people who will be using the nontraditional forms a lot. For those folks, it has two main benefits: moving the screen around is quite smooth and seamless (you don’t have to use two hands to flip the whole machine around, as you would with a 2-in-1 workstation), and the hinge is also sturdy enough that you can draw in Float and Share with no wobble at all. Of course, this sturdiness comes with a big weight penalty, in addition to its price premium — the Ezel is much heavier than most convertible machines.
That extra heft isn’t for nothing — there are some serious fans in this device. Specifically, there are two “4th-Gen AeroBlade 3D” fans in addition to three heat pipes, and there are vents all over the place including the sides of the case and above the keyboard. The system (which Acer calls its “Vortex Flow” design) did a good job of keeping the chassis cool during my day-to-day work — the bottom sometimes got warm but was never uncomfortably hot, and I never felt much heat on the keyboard or palm rests.
Acer ConceptD 7 Ezel benchmarks
Cinebench R23 Multi
Cinebench R23 Single
Cinebench R23 Multi looped for 30 minutes
Geekbench 5.3 CPU Multi
Geekbench 5.3 CPU Single
Geekbench 5.3 OpenCL / Compute
PugetBench for Premiere Pro
The fans had trouble keeping pace with the CPU, though. Temperatures stayed solidly in the mid-70s to mid-80s (Celsius) during a 30-minute loop of Cinebench — but throughout several runs of a five-minute, 33-second 4K video export in Adobe Premiere Pro, I saw it jump to the mid-90s, and often even the high 90s. Cinebench scores did decrease over time, and export times also got slower.
The ConceptD took two minutes and 55 seconds to complete the video export, which is one of the fastest times we’ve ever seen from a laptop. The Dell XPS 15 with the same processor and a GTX 1650 Ti took four minutes and 23 seconds (though different versions of Premiere Pro can impact export times, so synthetic benchmarks such as Cinebench are more precise for direct comparison).
I also ran PugetBench for Premiere Pro, which measures a device’s performance on a number of real-world Premiere Pro tasks, and the ConceptD scored a 604, which beats the XPS 15 as well. The ConceptD also solidly beats the XPS on Geekbench 5 across the board. The XPS isn’t exactly on a level playing field here, since it has a weaker GPU — these results just illustrate the increased performance that the ConceptD will give you for the extra money. Acer’s machine did lose to Apple’s M1 MacBook Pro in both single-core tests, which underscores how powerful Apple’s processor is in single-core workloads.
The Ezel comes with some software features tailored to creative work as well. In Acer’s ConceptD Palette app, you can swap between Native and Adobe RGB color presets, as well as customizable profiles. You can also monitor CPU, GPU, and memory usage to see how much power your apps are using, and you can toggle between various split-screen layouts if you’re multitasking.
Acer says it’s worked with developers to “optimize” the device to work with various software including Premiere Pro, After Effects, Maya, Revit, and KeyShot. You could also run games on the ConceptD, but it wouldn’t be the best choice since the screen is just 60Hz and won’t be able to display very high frame rates.
As is often the case with big workstations, the Ezel’s battery life isn’t amazing. I averaged four hours and five minutes of continuous use with the screen around 200 nits of brightness. That’s not unexpected, considering the high-resolution display and the discrete GPU, but it’s worth noting that you’ll probably need to bring the hefty brick with you if you’re taking the Ezel out and about.
Elsewhere, the ConceptD 7 is a fine laptop to use. The keyboard is a bit flatter than I prefer but comfortable enough. The backlighting is a dark orange color (Acer calls it warm amber) that looks nice against the white deck. The touchpad is a bit small for a laptop of this size, and I sometimes hit plastic while scrolling, but it is quite smooth. The chassis itself is a sturdy magnesium-aluminum alloy, and it’s covered in a nice white finish that Acer says is “highly resistant” to dirt and sun exposure. There’s a fingerprint reader built into the power button on the left side of the chassis, which works just fine.
I enjoyed using the built-in stylus, though it’s a bit stiff to pull out of its garage and requires a substantial fingernail. The pen uses Wacom EMR technology, meaning it never needs to be charged; it draws its power from inside the display. I enjoyed the limited drawing I was able to do on the smooth matte display (I’m an amateur artist at best).
Acer says the ConceptD utilizes “improved psychoacoustics” to provide a better listening experience. You can swap between presets for music, voice, movies, and various types of games in the DTS:X Ultra app that comes preloaded if you have external speakers or headphones connected. If you’re using just the laptop, there are Music, Game, Movies, and Voice presets in ConceptD Palette. The dual front speakers themselves deliver not-great audio that’s quite lacking in the bass department.
The ConceptD 7 Ezel is… well, in a word, it’s awesome. But you don’t need me to tell you that you don’t need to spend $4,000 to get an awesome device. If you want a touchscreen convertible with stylus support and can live without quite this much processing power, devices like the Dell XPS 13 2-in-1 and the HP Spectre x360 15 are half the price of this device, more portable, and also have outstanding screens. The Spectre’s screen doesn’t literally fold over the keyboard, but it’ll work for many of the same use cases. And even for folks who want this particular form factor, the smaller ConceptD 3 Ezel will be a more practical purchase. The ConceptD 7 Ezel is for those who need serious power.
But man, is the ConceptD 7 Ezel a great device for content creators. As a professional reviewer, I’ve used more creator-focused laptops than most people on the planet — and I’ve never used anything like this. It’s a great idea, it’s powerful, it’s well-built, and it’s a lot of fun to use. I won’t recommend that you buy it — but if you do, please know that I’m very jealous of you.
Blizzard has been relatively quiet of late when it comes to Overwatch 2. The developer teased some new character designs in February at Blizzcon, but there hasn’t been much news about the core game. It looks like that’s about to change, though: on May 20th, Blizzard will hold a two-hour-long live stream focused on the player-versus-player elements of Overwatch’s sequel.
Blizzard says that the stream will feature “a first look at player-versus-player changes coming to Overwatch 2.” The stream will also include appearances by Overwatch 2 game director Aaron Keller — who recently took over for Jeff Kaplan — along with lead hero designer Geoff Goodman and associate art director Dion Rogers. “From new maps to major gameplay updates, we’re reinvigorating the core Overwatch experience,” Blizzard says. The stream will be broadcast on both Twitch and YouTube at 3PM ET.
Last November, Google — which until now has offered unlimited storage for “high quality” (read: compressed) photos — announced that “unlimited” is being changed to “up to 15GB on your Google account.” In other words, while photo and video storage currently does not count against your total of 15 free gigs on a Google account, it will as of this coming June 1st — along with your Gmail, Google Drive files, and other stored data. Once you hit that 15GB wall, you will have to buy into the Google One service to increase your storage capacity. (Unless you own a Pixel, in which case you still have no limits on “high quality” photos.)
If you’re a Google Photos user who finds all of this a bit irritating, you may be thinking of leaving. But first, it’s a good idea to check out your alternatives. Below are some of the main photo storage services available to you, along with their basic fees, so you can figure out whether you want to switch. (Note: We’ve only included services that are specifically geared toward photos, not more general storage services such as OneDrive or Dropbox.)
Google provides each of its accounts with 15GB of free storage. However, for the last few years, photos have been treated differently: under its “high quality” plan, Google stored an unlimited number of photos for free as long as you allow them to be compressed to 16 megapixels. (According to Google, photos that size can be printed without issue up to 24 x 16 inches.) Videos were kept to a maximum of 1080p. (Data such as closed captions could be eliminated to save space.) “Original quality” photos — those that were not compressed — were not part of this unlimited plan but were counted as regular files.
However, all of that is changing. As mentioned above, starting on June 1st, 2021, Google will be including photos in its storage calculations. Once you hit that 15GB ceiling, you will have to buy into the Google One service for additional storage space.
Google One currently starts at 100GB of storage for $1.99 a month ($19.99 a year) and proceeds to 200GB for $2.99 a month ($29.99 a year) or 2TB for $9.99 a month ($99.99 a year). The 2TB plan also comes with a VPN for Android phones.
Before you run to invest in Google One, be aware that there are several mitigating factors Google is offering its users. When the new plan goes into effect, that is when the clock starts; photos you uploaded before then won’t count toward your 15GB limit. Also, if you’re a Pixel owner, then you can continue to upload high-quality photos without affecting your 15GB limit. (Of course, Pixel owners used to get unlimited original quality for free, rather than having to upload their photos in “high quality.” But hey, it’s something.)
If you’re part of Apple’s ecosystem, then you have easy access to iCloud Photos, Apple’s equivalent to Google Photos. iCloud Photos is connected to the Phone app on your Mac or iOS device as a backup for your photos. You automatically get 5GB of storage space associated with your iCloud account; after that, it costs 99 cents per month for 50GB, $2.99 per month for 200GB, and $9.99 per month for 2TB. (This is for the US; other countries have different fees.) Windows users can also access iCloud Photos via an associated app; Android users will have to access it using a browser.
Flickr has a free plan as well, but it’s limited to 1,000 photos — within certain guidelines: photo files are limited to 200MB and video files to 1GB. For unlimited storage without ads, you pay either $6.99 a month or $59.99 annually (plus tax). Other advantages to a paid annual membership include stats about which of your photos are trending and a variety of discounts from several companies, including Adobe and SmugMug (which is now part of Flickr).
Speaking of SmugMug, this long-lasting service is also available, offering storage, portfolios, and sales opportunities for professionals. For $55 a year or $7 a month, you get unlimited uploads and a customizable website. The Power plan ($85 a year or $11 monthly) adds site customization and your own domain name. If you’re looking to be a professional photographer, the Portfolio plan adds e-commerce features for $200 a year or $27 a month (you keep 85 percent of the markup). And finally, the Pro plan lets you create events, price lists, and branded orders, among other features, for $360 a year or $42 a month. If you’re interested in trying it out, you can get a two-week trial.
Canadian company 500px is actually more for professional photographers than your average snap-and-save picture taker. It offers pros a place to store, exhibit, and license their work. So if you have ambitions to start peddling your photos, 500px may be worth checking out.
The site offers two paid plans. The first, modestly named Awesome, offers unlimited uploads, priority support, no ads, a history of “liked” photos, gallery slideshows, and a profile badge for $59.88 a year or $4.99 monthly. The Pro plan adds a way to display your services and organization tools for $119.88 a year or $9.99 monthly. (You get a discount on your first year: Awesome costs $47.88 a year or $3.99 monthly, while Pro goes for $71.88 a year or $5.99 monthly.) And if you want to make a bit of money, you can submit your photos to be licensed for stock usage through 500px.
There is a free ad-supported plan that gives you seven uploads a week. When you sign up, you can try out the Pro plan for two weeks before committing yourself.
Photobucket offers a limited free plan, allowing you to upload up to 250 photos for free — more a trial plan than anything else. If you like what you see, you can start with the Beginner plan at $5.99 per month or $64.68 annually, which gives you 25GB of storage, along with no ads, password-protected album sharing, and an image editor. For $7.99 per month or $86.28 annually, the Intermediate plan provides 250GB of storage and unlimited image hosting. Finally, for $12.99 per month or $140.28 annually, the Expert plan offers unlimited storage and no image compression, among other extras.
DeviantArt calls itself “the world’s largest art community” with a social network for visual artists of all kinds. It offers visitors a wide range of artist galleries to view, divided into categories such as traditional, animation, and illustrations. DeviantArt (or DA for short) even has its own publishing platform called Sta.sh — emphasizing the fact that this site, like 500px, is less for simple storage and more for showing (and selling) your art.
With a free membership in DeviantArt, there are no restrictions on how much you upload for public access, and you get admission to DA’s community of artists and art lovers. Core Members enjoy additional perks. For $3.95 a month or $39.95 a year, you get to sell your art with no service fee (but a 20 percent fee on Premium Gallery & Premium download sales) and a $100 max price per digital item, along with 20GB of private storage space in Sta.sh. For $7.95 a month or $79.95 a year, you can charge up to $1,000 per item and pay a 12 percent fee on Premium Gallery & Premium download sales, along with 30GB of private storage. Finally, $14.95 a month or $149.95 a year lets you charge up to $10,000 per item, lowers your fee to 10 percent fee per sale, and gets you 50GB of storage.
Amazon provides its Prime members with a grab bag of extras along with the free shipping. In addition to the video offerings, music streaming, and other goodies, you get unlimited photo storage for $119 a year.
A nice perk is that you can share that unlimited storage with five friends or family members in what is called the Family Vault. Everything there is accessible to everyone who shares the Vault. “Unlimited,” by the way, does not include videos or other files; for those, Prime members get 5GB of storage, and after that, there is a long list of storage plans available starting from $1.99 a month for 100GB.
That’s something to keep in mind if you drop your Prime membership. In that case, according to the Amazon instructions, “the unlimited photo storage benefits associated with the membership end. All uploaded photos count toward your Amazon Drive storage limit.” The total storage for non-Prime members (stills and video) is 5GB.
Update November 12th, 4:55PM ET: This article has been updated to include SmugMug and to explain that it only covers photo-specific services.
Update November 16th, 10:30AM ET: Updated to add iCloud Photos and to update the prices and screenshot for 500px.
Update May 11th, 2021, 1:10PM ET: Several prices and screenshots updated.
The new program is only rolling out in the US for now, and even if you are given the chance to sign up, it doesn’t mean that you’ll actually be selected to buy one of the highly in-demand consoles.
Today we’re introducing the Console Purchase Pilot, allowing US #XboxInsiders on Xbox One to register for a chance to reserve an Xbox Series X|S console. Check the Xbox Insider Hub on Xbox One for details. Limited space is available and not all who register will be selected. pic.twitter.com/MBkQmbSDWc
Customers will also have to use the Xbox Insider Hub app on an Xbox One console to both sign up for the Console Purchase Pilot and purchase the console itself — you won’t be able to conduct the transaction on a PC, web browser, Xbox 360, or Xbox Series X / S by design.
And with next-gen consoles expected to be hard to find for months to come — at its last update, Microsoft said to expect the new Xboxes to be tough to buy until at least June — testing out new ways to directly sell consoles to fans could be the best way to make sure that more units don’t end up on the eBay aftermarket.
Bose announced today that it will begin selling direct-to-consumer SoundControl hearing aids for adults with mild to moderate hearing loss on May 18th. They’ll cost $849.95 and will be sold directly by Bose in five states — Massachusetts, Montana, North Carolina, South Carolina, and Texas — before they’re available nationally.
The hearing aids are meant to be fit and controlled by the wearer without needing to see an audiologist for a hearing test and professional fitting. They use standard hearing aid batteries that Bose says will last up to four days if used 14 hours a day. Volume, treble and bass, and modes for different listening environments can be adjusted and preset in the Bose Hear app on iOS or Android. There’s a “Focus” feature with different settings, including a “Front” setting for filtering noise in busy rooms and an “Everywhere” setting that allows all sounds during walks outside or around the house.
“In the United States alone, approximately 48 million people suffer from some degree of hearing loss that interferes with their life. But the cost and complexity of treatment have become major barriers to getting help,” said Brian Maguire, category director of Bose Hear, in the press release.
Prescription hearing aids can cost several thousand dollars and are rarely fully covered by insurance. Less costly personal sound amplification products (PSAPs) are sold in stores, but they aren’t as effective or adjustable as hearing aids and aren’t regulated by the Food and Drug Administration. The SoundControl hearing aids are the first to be authorized by the FDA for use without assistance from a health care provider.
An older design for Bose hearing aids with “self-fitting technology” was cleared by the FDA through De Novo classification in 2018, meaning it was a low-risk product with no equivalent device already on the market. That design had a neckband and a rechargeable battery like Bose’s Hearphones, a hearing amplifier that was discontinued last year. The new design looks more like traditional hearing aids, with a receiver behind the ear and a small tip that sits in the ear canal, and was cleared by the FDA last week based on substantial equivalence to the 2018 De Novo authorization.
Samsung representatives won’t show up in person to Mobile World Congress in Barcelona this year, according to a statement obtained by Reuters. The company says it has “made the decision to withdraw from exhibiting in-person” and will instead attend remotely to prioritize the health and safety of its customers and employees. Samsung has yet to announce what its remote presence will look like.
We’ve reached out to several other large brands about their attendance plans, including ZTE, Huawei, Sony, and Lenovo. None were immediately available for comment, but we’ll update this article if we hear back.
The Retail, Wholesale and Department Store Union (RWDSU) is holding a hearing this week to contest the results of the Amazon union election. Organizers say the retail giant interrogated employees, spread anti-union propaganda, and held captive audience meetings.
These tactics might seem dirty, but most aren’t explicitly illegal. “According to the letter of the law, workers have the right to organize, but it doesn’t pan out that way in practice,” says Kelly Russo, an organizer with the Office and Professional Employees International Union (OPEIU) Local 2. “Labor laws are weak or have loopholes that allow employers to dissuade people from forming a union.”
The National Labor Relations Act (NLRA) of 1935 was meant to defend worker organizing. But the Taft–Hartley Act of 1947 poked holes in many of its protections. The legislation allowed states to pass right-to-work laws, forcing unions to represent workers who don’t pay dues, which weakens their position financially.
Now, Congress is considering a law that would dramatically reshape American labor. The Protecting the Right to Organize Act (also known as the PRO Act) would eliminate many of the roadblocks that workers face when they try to unionize. Among other things, it would ban companies from forcing employees to attend captive audience meetings where managers spread anti-union messages; allow unions to collect dues from non-union members — increasing their budgets and their ability to effectively organize; and establish penalties for companies that violate workers’ rights.
“The PRO Act is the most significant piece of labor legislation since the New Deal,” says Veena Dubal, a law professor at the University of California, Hastings. “It really addresses the ways that workers’ rights to organize have been eroded, which is why we see these huge misinformation campaigns and well-funded propaganda.”
The bill would have massive implications for tech companies that have faced few consequences for union busting. After the Bessemer election, RWDSU accused Amazon of firing an organizer for handing out union cards. This is illegal under current labor law. But even if it were proved true, the punishment for Amazon would be minimal. “There is no repercussion, essentially,” Dubal says. “A worker would have to file an Unfair Labor Practice charge, then they wait for months and months to find out if they were illegally fired. And then the only power that the state has to enforce this is to say, ‘Look, Amazon, you have to reinstate the worker and you have to put a sign up in the break room that says that you broke the law.’”
The PRO Act could also impact the Alphabet Workers Union, which launched in early January. The union is open to contractors as well as white-collar workers — making NLRB recognition nearly impossible. Under the PRO Act, that might change. The law reshapes the test for determining who is an independent contractor. “It would make it much easier for a lot of people who are misclassified right now as independent contractors to organize,” Dubal says.
It’s for that reason that Uber, Lyft, DoorDash, and Instacart have spent at least $1,190,000 on lobbying efforts to influence the bill, according to reporting in The Intercept. Uber has said that the possibility of drivers unionizing could pose a business risk, writing in its 2020 financial filing: “If a significant number of Drivers were to become unionized and collective bargaining agreement terms were to deviate significantly from our business model, our business, financial condition, operating results and cash flows could be materially adversely affected.”
So far, only 47 senators have signed on in support of the measure, leaving the bill far short of the 60 “yes” votes necessary for it to overcome a filibuster. Three Democrats — Sen. Mark Kelly (D-AZ), Sen. Kyrsten Sinema (D-AZ), and Sen. Mark Warner (D-VA) — have stayed relatively silent on the sweeping labor legislation, and unions have promised not to support the lawmakers in upcoming elections if they fail to back the bill, according to Politico. Labor activists went as far as showing up at Warner’s home on Wednesday with a cake to lobby for his support for the bill, The Washington Post reported.
“‘Send me the PRO Act.’ Support the President,” the cake read.
With Republicans largely opposed to the legislation, the PRO Act needs every ounce of support possible from Democrats and Independents in order to send it to the president’s desk. Without 60 lawmakers in favor of the legislation, one senator can prevent the bill from being brought for a vote by prolonging debate on it. “It’s the legacy of slave power. I have no other way to describe the filibuster than that. We need an abolition of this anti-democratic nonsense that prevents real progress,” says Tom Smith, organizing director of the Communications Workers of America.
In a statement to The Verge, Warner said, “I am concerned about the portion of the bill that deals with worker classification. Many Americans prefer alternative work arrangements that give them the freedom and flexibility to set their own hours or work around their other interests.”
A spokesperson for Kelly told The Verge that he was “evaluating the legislation and speaking about it with stakeholders in Arizona as he focuses on building an economic recovery that benefits working Arizonans who have been hit hard by the pandemic.”
Sinema’s office did not respond to a request for comment.
The lack of support from Senate Democrats has been a blow to unions that are hoping the PRO Act will help them regain some of their power. “I think the question for us is ‘are we going to take advantage of this moment or not?’” Smith says. “Are we going to make this another one of those moments in history when tens of millions of workers are able to successfully organize unions?”
President Joe Biden has repeatedly touted his support for the PRO Act. In March, he released a statement encouraging the passage of the bill: “As America works to recover from the devastating challenges of a deadly pandemic, an economic crisis, and a reckoning on race that reveals deep disparities, we need to summon a new wave of worker power to create an economy that works for everyone.”
That call was brought up again last month during Biden’s first joint address to Congress, when he said, “The middle class built this country, and unions built the middle class.”
Organizers hope they’ll get a chance to do so again, noting that the current labor landscape is heavily weighted toward corporations. “The hellscape that workers face when they try to exercise their fundamental human rights, and their freedom of association — it’s just so stacked against them right now,” Smith says.
Project Connected Home over IP (CHIP) — the ambitious smart home partnership that will see Apple, Amazon, Google, Samsung, the Zigbee Alliance, and dozens of other companies work together on an open standard — has gotten a new name: Matter.
The rebranding comes ahead of the first Matter certifications, which are set to arrive before the end of 2021. The new branding and logo are designed to help make it easier for customers to tell which devices work with Matter’s unified system, with the logo set to appear on future hardware products.
The goal of Matter is deceptively simple: make sure you’re able to use your smart home devices with the voice assistant (or assistants) of your choice, whether that’s Apple’s Siri / HomeKit, Amazon Alexa, or Google Assistant. At launch, Matter will run on Ethernet, Wi-Fi, Thread, and Bluetooth Low Energy.
Other big companies, like Philips Hue, are on board: the company has already promised to release a simple software update in the coming months that will make all of its past and present products compatible with Matter once it launches.
It’s an ambitious goal, which could vastly simplify the confusing parts of smart home setup — assuming companies are willing to put in the work to issue software updates and integrate the standard into their current and future products.
As part of the announcement, the Zigbee Alliance (which created the Zigbee standard for interconnected smart home gadgets) has announced that it’ll rename itself to the Connectivity Standards Alliance (CSA) as it expands to focus more on projects like Matter in addition to the existing Zigbee network.
TikTok’s next move to compete with Facebook might be to add an in-app shopping feature, according to a new report from Bloomberg. The publication writes that TikTok is testing in-app sales in Europe by partnering with several brands, including UK-based streetwear company Hype.
TikTok’s made some shopping moves in the past, like giving creators the ability to sell merchandise through an integration with Teespring, partnering with Shopify, and reportedly working on some kind of live-video infomercial product. This new prototype sounds more like how shopping’s been integrated on Instagram, with a separate shopping tab under a brand’s account that lists products with images and prices, Bloomberg writes.
The Hype account page currently does show what looks like a shopping section (though it’s blank for my US account) and the company did confirm to Bloomberg that it was participating in the test. We’ve also reached out to TikTok for further confirmation the shopping test is happening.
Shopping and TikTok seem like they could have a real peanut butter and jelly type of relationship. The bite-sized length and “stickiness” of TikTok videos seem perfectly suited for advertising, while the passive watching that TikTok encourages (at least in me) makes it easy to consume a lot of content. So far, this shopping prototype doesn’t sound quite as video-focused as whatever infomercial-style feature TikTok was previously considering, but I wouldn’t be too surprised to see links to the hypothetical shopping tab littered throughout a brand or creator’s videos at some point in the future.
It’s also more or less exactly what Facebook’s on its way to doing with Instagram, its TikTok competitor Reels, and the normal Facebook app itself. The company went on a slightly different kind of shopping spree in 2020, adding commerce functionality like the previously mentioned shopping tab and product information in Reels, and it hasn’t stopped there: Facebook is also testing sticker ads in Stories.
Wherever TikTok lands with shopping, tests like these seem to show the viral video app is ready to take advantage of its status as a household name and grow — whether it’s shopping or spreading TikTok features across other apps.
Subaru has released a pair of teaser images of its first electric car, which will be called the Solterra EV. Coming to the US, Canada, Europe, and Japan in 2022, it will be powered by the electric vehicle platform that Subaru has been co-developing with fellow Japanese automaker Toyota.
In true teaser fashion, the images don’t show off too much. One, which appears to be more of a rendering than an actual photograph, reveals that the EV will… roughly look similar to Subaru’s other SUVs, though it seems to be on the smaller side. The other is a close-up of the rear badge, with a subtle splash of mud as a nod toward Subaru’s outdoorsy bona fides.
That’s basically all Subaru is saying for now, though. No pricing, no specs, and no information about whether Subaru will use the Solterra as a chance to refresh the way it designs its vehicle interiors (as many other automakers have with their first electric vehicles). Pretty much the only other detail Subaru shared is that the name “was created using the Latin words for ‘Sun’ and ‘Earth’ to represent Subaru’s commitment to deliver traditional SUV capabilities in an environmentally responsible package” — which, as far as corporate naming conventions go, is kind of refreshingly harmonic, even for Subaru.
Solterra is certainly more pleasing to the eyes than “BZ4X,” which is the name of the first SUV Toyota will build on this shared platform with Subaru. The BZ4X is also due out in 2022; Toyota calls the shared platform e-TNGA, while Subaru calls it e-Subaru (which is not so harmonic). The companies have said that the vehicles built on this platform will benefit from Subaru’s experience with making really good all-wheel drive systems and Toyota’s years of developing battery tech for its hybrids.
As thin as the press release is, being able to talk about Subaru’s first EV never felt like a total given. Along with Toyota, Subaru has avoided making a splashy or expensive transition to electric vehicles during a time when nearly every other automaker took the leap. The company’s slow approach has, at best, seemed like a sober read of the current market and, at worst, seemed out of touch — like when it advertised its all-wheel drive technology as a “great opportunity to cope with recent climate change” in a since-deleted press release.
The Xbox Series X / S’s handy Quick Resume feature, which suspends supported games so that they boot up more quickly when you come back to them later, is getting some nice improvements in the May Xbox update.
Once the update is installed on your console, a new tag will show which games are kept in Quick Resume, and you’ll be able to group those games so you have access to all of them in a single spot. Quick Resume will also get “improved reliability and faster load times,” Microsoft’s Jonathan Hildebrandt said in a blog post.
The new update also adds passthrough audio for media apps like Disney Plus and Apple TV, meaning that audio from those apps can be sent directly to a compatible HDMI device. There are also new parental settings that let parents unblock multiplayer mode for individual games and a new dynamic background.
The May update should be rolling out now, and it might already be available for you. While writing this story, I booted up my Series X and it installed.
Microsoft also announced that it will be sunsetting the Xbox One SmartGlass app for PC starting in June. “This means the SmartGlass app will be removed from the Windows Store and there will be no further updates for those who have the app already downloaded to their devices,” Microsoft said.
YouTube plans to pay $100 million to creators who use YouTube Shorts, its TikTok competitor, throughout the next year. The goal is to encourage creators to pick up and continually post to its new service, which doesn’t otherwise give creators a built-in way to make money.
Exactly how much creators can earn is still up in the air. YouTube says that it’ll reach out to creators on a monthly basis, looking for people with the most engagement and views. “Thousands” of creators could get paid each month, YouTube says, and basically anyone who posts to Shorts is eligible. The one caveat is that their videos have to be original content, and, of course, abide by YouTube’s community guidelines.
YouTube started launching Shorts in the US in March. The short videos appear in YouTube’s mobile app and, just like TikTok (or Instagram Reels or Snapchat Spotlight), you can swipe from one to the next in an endless full-screen feed.
Other companies have taken the same approach to encouraging creators to stick with their platform. TikTok launched a $200 million creators fund in July 2020, and Snapchat paid out $1 million per day for a period of time after its TikTok competitor, Spotlight, launched in November 2020.
Payments will be available in the US and India — the two regions where Shorts has launched — to start, but YouTube plans to expand its availability as it rolls out the service to more regions. There’s no specific date yet for when YouTube will start offering payments. YouTube says the fund will last from its start this year through some point in 2022.
HTC is revamping its Vive Focus standalone virtual reality headset, adding a 5K screen, a 120-degree field of view, and a swappable battery. The Vive Focus 3 follows HTC’s Vive Focus Plus. Like its predecessor, it’s a self-contained product that uses inside-out tracking instead of external sensors, designed for business customers. The headset will be released worldwide on June 27th for $1,300 — a price that includes tech support and a suite of business services.
The Vive Focus 3’s biggest specs upgrade is its display. The headset uses one 2448 x 2448 panel for each eye with a 90Hz refresh rate, compared to 1440 x 1600 pixels per eye with a 75Hz refresh rate for the 2019 Vive Focus Plus. (For the record, there’s no Vive Focus 2 — HTC seems to be retroactively giving the Focus Plus that status.) That’s a leap over the 2160 x 2160 pixels per eye you’d find with the HP Reverb G2, a more significant bump over the 1832 x 1920 pixels per eye of the Oculus Quest 2, and a vast improvement over first-generation VR headsets from a few years ago.
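Multiplying out the per-eye resolutions above makes the gap concrete; a quick comparison in Python:

```python
# Per-eye pixel counts for the headsets mentioned above, highest first.
headsets = {
    "Vive Focus 3":    (2448, 2448),
    "Vive Focus Plus": (1440, 1600),
    "HP Reverb G2":    (2160, 2160),
    "Oculus Quest 2":  (1832, 1920),
}

for name, (w, h) in sorted(headsets.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
    print(f"{name:>15}: {w * h / 1e6:.2f} MP per eye")
```

The Focus 3 pushes roughly 6 megapixels per eye — about 2.6 times the Focus Plus it replaces.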
The Focus 3’s 120-degree field of view also improves on the Focus Plus’ 110 degrees. Where the Focus Plus had a built-in battery, the Focus 3 has a swappable battery that’s mounted on the back of the headset for better balance. HTC promises two hours of battery life and a quick-charge system that will restore 50 percent of the battery with a 30-minute charge. And it uses Qualcomm’s Snapdragon XR2 chip — based on Qualcomm’s Snapdragon 865 — in place of the Focus Plus’ Snapdragon 835.
HTC also says the Vive Focus 3 will be more comfortable than its predecessor, thanks in part to features like the back-mounted battery, as well as a reworked strap design. Both systems use built-in speakers. But HTC touts a new and improved audio system, including an “audio privacy” setting that reduces the amount of sound people around you can hear — a running issue with VR headset speakers.
The Vive Focus 3 has a lot in common with Facebook’s consumer-focused Oculus Quest 2, which was released last year. The Quest 2 is also a standalone, XR2-powered VR headset. And while HTC’s first controllers were trackpad-based remotes, it’s moved toward a system that basically mimics the Oculus Touch controller. The Vive Focus 3 design features a Touch-style analog stick, two triggers, two lettered face buttons, and one menu or home button on each controller. Hand tracking — which the Quest 2 also supports — is supposed to arrive in a future update.
The Vive Focus Plus had a streaming system for wirelessly playing PC games on a headset, similar to the Oculus Quest 2’s Air Link, and the Vive Focus 3 includes Vive Business Streaming, a cable-based system for connecting it to a PC. (Wireless streaming will be added later, HTC says.)
HTC Vive general manager Dan O’Brien describes the Vive Focus 3 as higher-end than the Quest 2, not only in specific features like screen resolution and a replaceable battery, but in overall ergonomics and performance. It’s one of only a few headsets that directly compete with the Quest 2’s all-in-one design. And unlike Facebook, HTC doesn’t require a social media-based sign-in — arguably the worst feature of the Oculus Quest 2, and a factor some VR fans have called a dealbreaker.
But HTC is steering consumers away from the headset. The company is still maintaining a consumer-focused games store called Viveport, and O’Brien says HTC expects some gamers to buy its newly announced Vive Pro 2. But the Vive Focus 3 was designed with feedback from carmakers, professionals running simulation training, medical companies, and other enterprise customers. Instead of including consumer media or fitness apps, HTC provides services like a centralized device management system or six months of free Vive Sync, a VR meeting and collaboration tool that launched last year.
O’Brien says that selling business hardware is simply more sustainable right now than selling to consumers — largely because of price expectations. “The consumer market has gravitated toward these artificially subsidized price points that really only one company in the world has any tolerance for,” he says. That company is Facebook, which sells the Oculus Quest 2 at a starting price of $299 — less than a quarter of the Vive Focus 3’s cost.
“If we wanted to take our products and try to compete in that space, we would have to make the active decision as a for-profit company to lose money for the foreseeable future,” then make it up through software sales or a system like Facebook’s advertising model, O’Brien says. “That’s a very different business model and market, whereas the enterprise and professional market is a very healthy and rapidly growing market where we can bring real value and solutions.” Competitors like Varjo have taken a similar tack, experimenting with innovative display systems and augmented reality features but aiming their efforts solely at businesses.
By contrast, HTC’s consumer-focused Vive Cosmos “has not performed as well as we would have liked,” although the company is still looking at ways to advance the brand. “I think the consumer story is still yet to be told from HTC, but that’s something to be told in the future,” says O’Brien. The Vive Focus 3 isn’t part of that story — at least for now.
HTC has unveiled the new Vive Pro 2, an update to its high-end virtual reality headset that adds a host of small but worthwhile improvements.
The biggest change is the new display, which now offers 5K resolution (or 2448 x 2448 pixels per eye), a 120Hz refresh rate, and a 120-degree field of view. That’s a decent step up from the original Vive Pro, released in 2018, which had a 2880 x 1600 resolution, 90Hz refresh rate, and 110-degree field of view. The new Vive Pro 2 also supports Display Stream Compression, or DSC, a first in a VR headset. DSC is a visually lossless standard most frequently seen in high-end monitors.
As a result of all this, HTC says the Vive Pro 2 delivers “minimal motion blur.” The company also claims that the meshed lines noticeable in many older VR headsets, commonly known as the screen door effect, have been “virtually eliminated” — although these have vastly improved in other newer devices like the Oculus Quest 2 and HP Reverb G2, too.
Display aside, the Vive Pro 2 has the usual ergonomic features of high-end headsets, including adjustable straps, quick-adjust sizing dials, and adjustable interpupillary distance (IPD). The integrated headphones are Hi-Res Audio Certified with 3D spatial sound, and the headset also works with third-party headphones. The new Vive Pro 2 will also work with all Vive SteamVR accessories, including Vive trackers, the new Vive Facial Tracker, and any SteamVR controllers and other accessories.
Design-wise, the renders make the new Vive Pro 2 look a little sleeker and more compact than its predecessor, though we haven’t yet received the weight or dimensions of the new headset, nor seen it in person to compare.
If you’re upgrading from the original Vive Pro, HTC is running a special promotion during the preorder period, selling the headset on its own (without controllers or external trackers) for $749 / £659 / €739. Once that promotion ends, the price for the standalone headset will be $799 / £719 / €799. A fully kitted-out Pro 2, including Base Station 2.0 and controllers, will be available to buy from June 4th for $1,399 / £1,299 / €1,399.
Solar and wind energy growth soared in 2020 and is on course to keep catapulting upward. Last year, renewable sources of electricity grew faster than they have since 1999. That rapid rise is far from a one-off event, according to the International Energy Agency (IEA), which said today that the “exceptionally high” growth in 2020 is the “new normal.”
It’s yet another signal that renewable energy is elbowing out competition from fossil fuels, at least when it comes to electricity. New renewable energy capacity — primarily solar and wind — made up a whopping 90 percent of the power sector’s growth globally last year, according to the IEA, an intergovernmental organization that was founded to monitor the world’s oil supply but now also tracks renewable energy. The agency forecasts renewables to again account for 90 percent of the power sector’s expansion in 2021 and 2022.
That transition to renewable energy for electricity falls in line with many countries’ goals on climate change. President Joe Biden, for example, aims to get the US power sector running completely on clean energy by 2035. Electrifying buildings and transportation so that they can use solar and wind instead of oil and gas is one way governments and the private sector have moved to slash greenhouse gas emissions.
“Wind and solar power are giving us more reasons to be optimistic about our climate goals as they break record after record,” IEA executive director Fatih Birol said in a press statement today.
Renewable electricity capacity grew by 280 gigawatts last year. The uptick amounts to a 45 percent rise in renewables last year compared to the year before. The IEA expects another 270 to 280 GW to come online this year and again in 2022.
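One detail worth unpacking: the 45 percent figure refers to growth in annual additions, not total installed capacity. Working backward from the numbers above, 2019’s additions come out to roughly 193 GW:

```python
# If 2020's 280 GW of new renewable capacity was a 45% jump over 2019's
# additions, we can back out the 2019 figure.
additions_2020_gw = 280
growth_rate = 0.45

additions_2019_gw = additions_2020_gw / (1 + growth_rate)
print(f"Implied 2019 additions: ~{additions_2019_gw:.0f} GW")  # ~193 GW
```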
Governments and companies purchased renewable energy at “record-breaking” rates last year, according to the IEA, and their appetite is still growing. Amazon proclaimed itself the largest corporate buyer of renewable energy in the world in December 2020, beating out former record-holder Google, and it now has 8.5 GW of renewable energy capacity globally. The IEA’s estimates for global renewable electricity growth over the next couple of years are now more than 25 percent higher than previous estimates it made just six months ago.
Epic Games has been trying to convince Sony, Microsoft, and other publishers to bring their titles to the Epic Games Store and is willing to spend millions to do so.
A 222-page confidential document, filed as part of the Epic v. Apple trial, reveals a broad effort to compete with Steam during 2020 with free games, Fortnite promotions, and more. The document was originally created in September 2020 and was published and then quickly deleted last week.
The document also shows that Epic offered Sony $200 million for at least four first-party PlayStation games last year, ahead of a bigger push by Sony to bring more of its PlayStation games to PC. PC Gamer first reported on parts of this document last week, after a ResetEra forum thread detailed one of the slides. We have confirmed the document is real, and it contains lots of details around Epic’s efforts to improve its game store and compete with Steam.
It looks like the Sony deal fell through, as Horizon Zero Dawn appeared on Steam last year, with Days Gone arriving next week. Epic Games has also been trying to convince Microsoft to bring its first-party games to the Epic Games Store. “Their PC Game Pass leader is against what we’re doing,” notes the document, and Microsoft is “effectively bidding against us for content.”
Microsoft first started distributing more of its games to Steam in 2019, but this Epic document appears to suggest that the Xbox maker has been talking to Valve about Xbox Game Pass on Steam. Rumors have suggested Valve is keen to get Game Pass on Steam, but nothing has been announced so far. Epic’s document notes that Xbox chief Phil Spencer “is meeting with Gabe at Valve occasionally,” in the same section that discusses Microsoft’s Xbox Game Pass efforts.
Epic has also been meeting with Riot Games, Activision / Blizzard, and EA in a bid to get more PC game content for its store. “League of Legends is a longshot,” admits Epic in the document, while also noting the deal to launch Tony Hawk’s Pro Skater 1 + 2 on the Epic Games Store could lead to more in the future.
Elsewhere in the document, it’s also revealed that Microsoft has apparently been requiring indie devs to agree to xCloud game streaming support in order to publish on Xbox. “Microsoft is using harsh language around the requirement,” claims Epic, noting that it’s “sign or be removed from Xbox.” We’ve reached out to Microsoft for comment on these claims. Other documents in the Epic v. Apple trial also detailed that Microsoft was proposing to reduce the cut it takes on PC game sales in exchange for xCloud streaming rights. Microsoft has since announced that reduction to 12 percent, but it has not revealed whether developers have to agree to grant the company streaming rights.
The document shows how eager Epic was to attract top-selling games from Steam into the Epic Games Store. Epic identified an approximate $500 million revenue opportunity if all titles from competing platforms were available on its store. But out of the top 100 selling games on Steam in 2019, only 20 were available on the Epic Games Store.
“These existing Platinum, Gold, Silver, and Bronze titles not being on Epic Games Store is a glaring hole in our PC catalog,” admits Epic in the document. “It will be hard to move market share so long as they are not also at Epic Games Store.”
Epic obviously faces the challenge of convincing publishers and developers to bring games to its PC store, but free games have been the biggest draw. Other documents in Epic v. Apple revealed earlier this month that Epic spent at least $11.6 million on free games, and gained 5 million new users in return.
Epic has also been leveraging the popularity of Fortnite to bring more people to its store, with free cosmetics and campaigns to launch free games and raise awareness of the Epic Games Store. Rocket League also boosted the Epic Games Store numbers when it went free-to-play in September, with Epic employees celebrating bigger numbers on its own store than through Steam for launch day.
While Epic revealed in January 2020 that it has 100 million Epic Games Store users, its internal documents highlight how its monthly active user count changes when there are free games. In September 2019, the monthly active user count (excluding Fortnite) jumped to 10.2 million and then dropped to around 8 million until February 2020, when it jumped back up to 10.14 million when Farming Simulator 19 and Assassin’s Creed Syndicate were both free in the store. Epic made Grand Theft Auto V free in May 2020, and this caused the active user count to jump to a massive 45.4 million before dropping back down to 29.3 million the following month.
Intel is adding new processors to its 11th Gen Core H-series lineup today, and over half a dozen laptop manufacturers are announcing new machines that make use of them. In total, there are 10 new Tiger Lake-H processors being announced today, including five consumer processors and five commercial processors, with between six and eight cores. Here’s our full writeup on the chips themselves.
According to Intel, its new H-series processors will be used in over 30 upcoming ultraportables (that is, laptops 20mm thick or less) and upward of 80 workstations. Companies including Razer, HP, Asus, Lenovo, MSI, Acer, Gigabyte, and Dell are announcing their first laptops with the new chips today, and we’ve rounded up their models below.
Razer has announced a range of new Blade 15 Advanced laptops featuring Intel’s 11th Gen H-series processors. At the top of the lineup is a model with a Core i9-11900H paired with an RTX 3080 GPU with 16GB of video memory and a 4K 60Hz OLED touchscreen. But if you’re looking for something a little less powerful, you can get a machine that’s just 15.8mm thick, and Razer claims it’s the smallest 15-inch gaming laptop with RTX graphics. This thinner model is a step down specs-wise: it has a Core i7-11800H, an RTX 3060 GPU, 16GB of RAM, and a QHD 240Hz IPS display.
Razer’s laptops will be available to preorder from May 17th and will ship in June. Prices start at $2,299. Read more about Razer’s new laptops here.
HP has three new laptops it’s announcing today: the ZBook Fury G8, the ZBook Power G8, and the ZBook Studio G8. The Studio G8 can be configured with up to an Intel Core i9-11950H vPro processor, alongside an Nvidia RTX 3080 GPU with up to 16GB of video memory (there’s also the option of equipping it with a more creative-focused Nvidia RTX A5000 GPU). Available display options for the ZBook Studio G8 include 1080p IPS, 4K 120Hz IPS, or 4K OLED.
HP’s ZBook Studio G8 will be available from July at a price that’s yet to be announced. Meanwhile, the Power G8 and Fury G8 will launch at some point this summer. Read more about HP’s new laptops here.
Asus has new Zephyrus laptops to bring to the table today. First is the Zephyrus M16, which will sit above its more mainstream G-series laptops like the Zephyrus G14 and Zephyrus G15. Asus says the M16 will be configurable with up to an Nvidia RTX 3070 GPU, alongside Intel’s H-series chips. In terms of its display, the Zephyrus M16 has a tall 16:10 aspect ratio, QHD resolution, and 165Hz refresh rate. The company is also announcing the Zephyrus S17, a premium gaming laptop, which is available with up to an Intel Core i9-11900H, 48GB of RAM, and an Nvidia RTX 3080 with 16GB of VRAM.
Pricing and release information for the Zephyrus M16 is yet to be announced. The Zephyrus S17 will be available at some point in Q2 in North America. Read more about Asus’ new laptops here.
While we’re on the topic of 16:10 displays, Lenovo’s new Legion 7i and 5i Pro gaming laptops also use the aspect ratio for their 16-inch screens, paired with a 165Hz refresh rate. Specs for the 7i range up to the flagship Intel Core i9-11980HK, which can be paired with up to an Nvidia RTX 3080 GPU with 16GB of video memory. Step down to the Lenovo 5i Pro and your most powerful options drop to the Core i7-11800H, with an Nvidia RTX 3070. On the lower end, Lenovo also has models featuring Nvidia’s new RTX 3050 and 3050 Ti GPUs.
The Legion 7i and 5i Pro will both release in June starting at $1,769.99 and $1,329.99, respectively. Meanwhile, the 5i will release later in July with a starting price of $969.99. Read more about Lenovo’s new laptops here.
MSI is announcing a number of new gaming and creator-focused laptops today, ranging from two Creator Z16 models (which are aimed at the kinds of customers that would otherwise have bought a MacBook Pro), down to its more gaming-focused “Katana” and “Sword” machines.
The Creator Z16 has a 120Hz 16:10 QHD+ touch display and is available with an Nvidia GeForce RTX 3060, and either a Core i7-11800H or a Core i9-11900H. Stepping down to the Creator M16 still gets you a QHD+ display, but its internal specs top out at Nvidia’s RTX 3050 Ti and Intel’s Core i7. There’s also a new Creator 17 using the new chips, which is available with up to a Core i9 and RTX 3080, and comes complete with a Mini LED display.
On the gaming side, MSI has also bumped over a half dozen laptops up to the new processors, including the GE76, GE66 Raider, GS76 Stealth, GS66 Stealth, GP76 Leopard, GP66 Leopard, GL76 Pulse, and GL66 Pulse. Finally, there’s the new “Katana” and “Sword” laptops. These are available with up to Core i7-11800H CPUs and include versions with Nvidia RTX 3060, RTX 3050 Ti, and RTX 3050 GPUs.
MSI’s Creator Z16 starts at $2,599, its Katana models start at $999, the Sword at $1,099, and pricing for the Creator M16 is yet to be announced. The laptops are due to release on May 16th. Read more about MSI’s new laptops here.
Dell / Alienware
Not to be left out of the action, Dell has a collection of new laptops it’s announcing based on Intel’s latest-generation H-series processors, with some targeting consumers and gamers, and others aimed at business users. There are Dell-branded models, as well as laptops from its Alienware subsidiary.
First up is the Alienware M15 R6. It’s available with up to a Core i9-11900H, 32GB of RAM, and an Nvidia RTX 3080 with 8GB of video memory. Its 15.6-inch display comes in 1080p 165Hz, 1080p 360Hz, or QHD 240Hz options. Dell is also teasing the Alienware X17 in a series of images, as well as the teaser trailer embedded above. Details on this laptop are currently slim, but the company says it’ll eventually be available with 11th Gen Intel Core processors and 30-series GPUs from Nvidia.
Dell is also announcing a new G15 laptop today. The laptop will be available with up to an Intel 11th Gen six-core Core i7 CPU, Nvidia 30-series GPUs, and a choice of 120Hz or 165Hz refresh rates for its 15.6-inch 1080p display.
Away from its gaming machines, Dell is also announcing revamped XPS 15 and XPS 17 laptops today. They’ll be available with Intel’s latest processors, Nvidia RTX graphics, and there’s also a new OLED screen version of the XPS 15. Finally, Dell is also releasing updated models across its business-focused Precision and Latitude lineups.
The Alienware M15 R6 will start at $1,299.99, the Dell G15 at $949.99, the XPS 15 at $1,199.99, and the XPS 17 at $1,399.99. All are available from today. Expect more information on the X17 in the months ahead.
Gigabyte is also announcing new laptops across its Aero, Aorus, and G series lineups.
First up from Gigabyte are new Aero series laptops aimed at creators. There’s the Aero 15 OLED, which is available with up to an Intel Core i9-11980HK, an RTX 3080, and a 4K HDR OLED display. Meanwhile, the Aero 17 HDR is available with up to the same specs, but it’s got a larger 17.3-inch display (up from 15.6 inches with the Aero 15) which is IPS rather than OLED.
Meanwhile over on the gaming side, there’s the Aorus 15P, Aorus 17G, and Aorus 17X. The 15P and 17G are available with Intel Core i7-11800H processors and up to an Nvidia RTX 3080 with 16GB of video memory. The Aorus 15P has a 15.6-inch 1080p IPS display that’s available with either 240Hz or 360Hz refresh rates, while the Aorus 17G has a 17.3-inch IPS display with a refresh rate of 300Hz. The Aorus 17X also has a 17.3-inch 300Hz IPS display and is available with up to an RTX 3080, but it features a more powerful Intel Core i9-11980HK processor.
Finally, there are Gigabyte’s 15.6-inch G5 MD and G5 GD, and its 17.3-inch G7 MD and G7 GD laptops. Resolution and refresh rate are 1080p and 144Hz across the board. The G5 MD and G5 GD have Intel Core i5-11400H processors, the G7 MD has an i7-11800H, and the G7 GD has an i5-11400H. The laptops are equipped with Nvidia’s new RTX 3050 and 3050 Ti GPUs.
The Aero 15 OLED starts at $1,799 and the Aero 17 HDR at $2,499, and both are officially on sale today. The Aorus 15P starts at $1,599 and the 17G at $2,099 (pricing for the 17X was not available at time of publication); they’re also available starting today. Preorders for the new G5 and G7 models also open today, with the G5 starting at $1,149.
Acer has three new laptops it’s announcing today: the Predator Triton 300, Predator Helios 300, and the Nitro 5. All three are spec bumps of existing models.
The company says its Triton 300 will be available with up to a 4.6GHz Intel 11th Gen H-series processor, an Nvidia RTX 3080 GPU, and 32GB of RAM. Available displays include a 165Hz QHD screen, or a 360Hz 1080p panel.
Next up is the Helios 300. It’s also available with Intel’s latest processors paired with 32GB of RAM, but it maxes out at an Nvidia RTX 3070 GPU. Like the Triton 300, it’s also available with a 360Hz 1080p or a 165Hz QHD display. Similarly, the Nitro 5 is also available with Intel’s latest-generation chips, an RTX 3070 GPU, and 32GB of RAM. Acer says the Nitro 5 is available with 15.6- or 17.3-inch QHD IPS displays with 165Hz refresh rates.
The Predator Triton 300 will be available in North America from July starting at $1,699, while the Nitro 5 will be available from June starting at $999. Pricing and availability for the Predator Helios 300 was not available at time of publication.
Aaron Fisher first came face to face with the US’s inconvenient and broken electric vehicle charging infrastructure two years ago. Driving a borrowed BMW i3 from New York City to Hartford, Connecticut, for his grandmother’s 90th birthday party, he assumed the journey would be a short, three-hour jaunt. Instead, it lasted a grueling seven hours.
“I had charger issues, I had payment issues, I had customer service issues, I had routing issues, because of the fragmentation around electric vehicle charging,” Fisher said. As for his grandmother’s birthday? “I missed her dinner,” he said ruefully. “She was not mad, but very disappointed.”
There are approximately 41,000 public charging stations in the United States, with more than 100,000 outlets. But finding one that actually works or isn’t locked inside a gated parking garage can be a bit of a scavenger hunt. The charging experience in the US is almost comically fragmented, especially for non-Tesla owners. While Tesla’s Supercharger network has been praised for its seamless user experience and fast charging ability, the opposite appears to be true for pretty much everyone else.
Fisher, a former management consultant who also briefly interned in the Obama White House, was so frustrated by his experience trying to drive an EV in the US that he founded his own company, EVPassport, based on the principle that charging your electric vehicle should not require signing up for a dozen different smartphone apps.
President Biden, who has pledged 500,000 new chargers by the end of 2030, will have his work cut out for him. Despite rapid growth in sales over the past few years, EVs are still a niche product, making up just 2 percent of the new car market and 1 percent of all cars, SUVs, vans, and pickup trucks on the road. That said, sales are expected to pick up in the next few years, depending on new incentives and point-of-sale rebates that are currently being debated. But when they do, there are real questions about whether the nation’s disjointed, low-tech, outdated charging system will be up to the task.
“There’s no APIs in the charging marketplace,” Fisher said, referring to the software intermediary that allows two apps to communicate. “It’s kind of like banking in the ’90s.”
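Fisher’s complaint is about the absence of any common programming interface across charging networks. As a purely hypothetical sketch of what a unified API might expose, consider something like the following; every name here (ChargingStation, find_available, the network strings) is invented for illustration and does not correspond to any real product or standard.

```python
# Purely hypothetical sketch of a unified charging API. No such common
# interface exists today, which is Fisher's point; all names below are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class ChargingStation:
    station_id: str
    network: str      # e.g. "EVgo", "ChargePoint", "Electrify America"
    available: bool   # is the charger working and unoccupied?
    max_kw: float     # peak charging power

def find_available(stations, min_kw=50):
    """Return working fast chargers regardless of which network runs them."""
    return [s for s in stations if s.available and s.max_kw >= min_kw]

stations = [
    ChargingStation("a1", "EVgo", True, 150),
    ChargingStation("b2", "ChargePoint", False, 62.5),
    ChargingStation("c3", "Electrify America", True, 350),
]
print([s.station_id for s in find_available(stations)])  # → ['a1', 'c3']
```

With one interface like this, a routing app could query every network at once instead of juggling a dozen proprietary apps, which is the gap EVPassport says it wants to fill.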
The Charging Problem
On paper, the EV charging network sounds like it’s doing pretty well.
A closer look reveals how incredibly uneven it is. One-third of those stations are located in one state: California, with a whopping 22,620 stations, according to a recent study by The Pew Charitable Trusts. Other states have few, bordering on none. North Dakota has 36 public chargers, Alaska just 26.
But don’t mistake California as some sort of bastion of enlightened EV ownership. A recent study published in Nature Energy by a research team from the University of California, Davis found that about 1 in 5 EV owners — 20 percent of plug-in hybrid vehicle owners and 18 percent of pure battery-electric vehicle owners — eventually switched back to gas-powered vehicles. The top reason cited was “dissatisfaction with the convenience of charging.”
This shouldn’t come as a surprise. “Range anxiety,” or the fear of running out of power before finding the next charging station, has long been cited as a major barrier to mass EV adoption. But while range has steadily improved over the years, with many EVs now able to travel 300 miles or more on a single charge, the anxiety has shifted to the inadequacies of the charging network.
Chargers can be hard to find, and the act of charging takes much longer than refueling a gas car. The research team at UC Davis noted this in their study. “The way in which a [plug-in electric vehicle] is charged has not changed, whereas vehicle range has been increasing since [plug-in hybrid electric vehicles] and [battery-electric vehicles] were introduced,” they conclude. Electric vehicle owners “have the option to purchase longer-range vehicles, whereas they cannot yet purchase a vehicle that is charged differently.”
EV boosters say that much of the problem stems from a faulty frame of reference. We see how many gas stations are available — around 150,000 by some estimates — and believe that the same should be true for chargers if EVs are ever to replace their gas-powered counterparts.
This ignores the fact that most EV owners do their charging overnight while parked in their driveway at home. But if EVs are to become a more attractive option to car buyers, charging stations are going to need to become more pervasive like gas stations. People need to see the expansion with their own eyes in order to overcome the psychological hurdle that prevents them from imagining an electric future.
There are educational obstacles, too. The way car companies communicate the amount of time it takes to charge an EV completely misses the point, said Chris Nelder, manager of Carbon-Free Mobility at the Rocky Mountain Institute. Automakers avoid talking about kilowatts and kilowatt-hours, instead relying on the nonsense metric of miles-per-minute of charging.
“We all understand this when it comes to gasoline,” Nelder said. “We all know what a gallon is. We all know how many gallons of gas our tanks can hold. We all know how many miles per gallon our vehicles get. But for some reason everybody operating in this space is afraid to explain to a customer what the equivalent is on the electric side.”
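The arithmetic Nelder is alluding to is simple: divide a charger’s power by the vehicle’s consumption to get range added per hour on the plug. The numbers below (a 150kW fast charger, a car consuming 0.30kWh per mile) are illustrative assumptions, not the specs of any particular vehicle.

```python
# Back-of-the-envelope conversion between charger power, vehicle
# efficiency, and the "miles per minute" figure automakers quote.
# The inputs are illustrative assumptions, not real vehicle specs.

def miles_per_minute(charger_kw: float, kwh_per_mile: float) -> float:
    """Miles of range added per minute of charging (ignores charging-curve taper)."""
    miles_per_hour = charger_kw / kwh_per_mile  # range added per hour on the plug
    return miles_per_hour / 60

# A 150kW DC fast charger feeding a car that uses 0.30kWh per mile:
print(round(miles_per_minute(150, 0.30), 1))     # → 8.3 miles per minute

# A 7.2kW Level 2 home charger with the same car:
print(round(miles_per_minute(7.2, 0.30) * 60))   # → 24 miles per hour
```

In practice charging slows as the battery fills, so these are peak figures, but the conversion shows why kilowatts and kilowatt-hours are the gallon-and-miles-per-gallon equivalents Nelder wants explained to customers.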
This is one of the educational hurdles that the Biden administration will confront in its quest to get more people in EVs. But it’s not clear the administration is focused on education, so much as incentivizing the purchase of more EVs through tax credits, point-of-sale rebates, and direct subsidies to automakers.
Money, the policymakers have determined, will be the thing to spur this shift.
The Money Problem
The White House has said it will spend at least $15 billion to begin rolling out the new charging stations. But experts say far more will be needed to shore up charging infrastructure to meet the growing demands of EV ownership.
Wedbush Securities has estimated that at least $60 billion will be needed to build the 500,000 chargers Biden has pledged by the end of 2030. Another analysis, by industry consultant AlixPartners, said $50 billion would be needed to grow the US charging network to meet demand within the next decade. That means Biden will need at least $35 billion more, either from private investments or state and local government matching funds, if he is to meet his goal.
Installing EV chargers can be expensive, depending on the level of charging that’s being offered. The higher the level, the quicker the charge and the more expensive it is to install. A public Level 2 charger might cost $2,000 out of the box, but a DC fast charger of 150kW or more can cost between $100,000 and $250,000, Nelder said.
The federal government could intervene in making those costs cheaper, but it won’t be easy. That would require sitting down with utility companies and regulatory commissions in all 50 states, as well as the private EV charging companies, to bring down the capital investments for charging stations through “make ready” programs. These are programs in which public utilities and local governments identify sites that are intended for EV charging and allow companies to submit bids for installation. There is no federal “make ready” program, though, and only a few states employ this method of fast-tracking EV charger installation. The Biden administration could make things a lot easier by creating a national system for states to use.
Another looming problem is utilization and how utilities charge for the electricity they provide. Most EV charging stations sit unoccupied because EVs still only make up a tiny fraction of the overall car market. That means the business case for building more chargers is very difficult to make.
“It’s that first few years where you want me to put a quarter million into a station, and then I can’t get better than 5 percent utilization,” said Henry Lee, director of the Harvard Kennedy School’s environmental and natural resource program. “I lose my shirt in the first three or four years, but by year seven or eight, I could be making money on it.”
Lee noted that the cost of electricity is another problem for EV charging companies. Demand charges from utility companies tend to dominate charging companies’ operating costs, further complicating the business case for building more charging stations. And the total cost of electricity rises with the level of charging a station provides. These calculations need to be rethought if the government wants to incentivize the EV charging industry.
“How you restructure this formula for these kinds of stations is something that we haven’t quite figured out,” Lee said.
The Tesla Problem
Tesla’s Supercharger network is often held up as the best possible example of an EV charging network: fast, reliable, and plentiful. But Tesla’s network is also exclusive to Tesla owners, meaning someone driving a Volkswagen EV wouldn’t be able to use it.
These kinds of closed systems are worrisome if more car companies decide they want their own networks, as Rivian plans to do. But Nelder said that two automakers building their own exclusive charging networks isn’t necessarily an indication of where things are headed for EV charging. First of all, Nelder said that Rivian’s goal of 3,500 fast chargers in two years was almost impossible, given the intense and expensive amount of work that each site requires.
“I literally laughed out loud,” he said.
But even then, Nelder said he hopes that the Biden administration is strict about what kind of charging projects are eligible for public money. “To whatever extent public money is being spent, it should only be spent on sites that are available to the public,” he said, “and that’s certainly true for this Biden infrastructure spending plan.”
Ideally, more electric vehicles will include the Plug and Charge standard that was initially introduced by ISO 15118. This standard enables an EV to automatically identify and authorize itself to a charging station on behalf of the driver. For example, when it goes on sale in the US later this year, the Mercedes-Benz EQS will be compatible with about 90 percent of the public charging stations in the US without the need to download an app or sign up for an individual charging service, thanks to the Plug and Charge system.
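At its core, Plug and Charge is a credential exchange: the car carries a signed contract credential from its mobility operator and presents it automatically, so the driver never opens an app. The toy sketch below captures only that shape; the real ISO 15118 standard uses TLS and a hierarchy of certificate authorities, and every name and key here is invented for illustration.

```python
# Toy illustration of the Plug and Charge idea: the vehicle identifies
# and authorizes itself to the station using a signed credential.
# Drastic simplification of ISO 15118; all names/keys are invented.
import hashlib
import hmac

BACKEND_KEY = b"mobility-operator-secret"  # shared with the charging network

def issue_contract(vehicle_id: str) -> str:
    """The mobility operator signs the vehicle's contract ID at enrollment."""
    return hmac.new(BACKEND_KEY, vehicle_id.encode(), hashlib.sha256).hexdigest()

def station_authorize(vehicle_id: str, credential: str) -> bool:
    """The station recomputes the signature; a match starts the session."""
    expected = hmac.new(BACKEND_KEY, vehicle_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)

cred = issue_contract("WVWZZZ-EV-0001")
print(station_authorize("WVWZZZ-EV-0001", cred))  # → True
print(station_authorize("WVWZZZ-EV-0002", cred))  # → False
```

The point of the design is that authorization rides on the cable itself rather than on an app, an RFID card, or a credit card reader at the station.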
But there are many EVs coming onto the market without Plug and Charge, such as the Volkswagen ID.4 and Chevy Bolt EUV, and it’s unclear why. This is another source of frustration and confusion for Aaron Fisher of EVPassport. It’s a sign that for all the good news on the horizon, the EV charging infrastructure in the US will remain opaque and challenging for months, if not years, to come.
“It feels like they’re missing some very core decisions at a high level,” Fisher said of the auto industry. “I don’t know if it’s a lack of planning, or just they’re trying to focus on cars and getting them out the door.”
SteelSeries has debuted its Rival 5 wired, right-handed gaming mouse that is available right now for $59.99. From a distance, the Rival 5 looks similar to the modular Rival 600, but this is a simpler and slightly more ergonomically friendly mouse. The overall shape of the device seems to have a softer curve, making it fit a little more comfortably for palm or fingertip grip users.
This mouse offers plenty of controls, starting with the primary mouse buttons, an RGB-backlit scroll wheel, and a DPI switcher. Two thumb buttons are near the concave thumb rest; they’re flanked by an interesting toggle bar that can be customized using SteelSeries’ software to execute an in-game command when you tilt it right or left. In addition, there’s a silver-colored side button that, in my brief testing, felt easy to reach and feel for in games. SteelSeries says this selection of buttons makes it adept for multiple gaming genres.
The Rival 5 weighs 85 grams and features the company’s TrueMove Air optical sensor — the same one found in 2020’s SteelSeries Aerox 3 Wireless and the Rival 3 Wireless. Its main mouse switches carry an IP54 rating to protect against some water and dust, and the company claims they can last for 80 million clicks.
At $59.99, this seems like a good, budget-friendly option for people looking for a versatile mouse. We’ll be testing it more in the near future to see how it compares to the best gaming mice of 2021.
Amazon’s Alexa has a voice familiar to millions: calm, warm, and measured. But like most synthetic speech, its tones have a human origin. There was someone whose voice had to be recorded, analyzed, and algorithmically reproduced to create Alexa as we know it now. Amazon has never revealed who this “original Alexa” is, but journalist Brad Stone says he tracked her down, and she is Nina Rolle, a voiceover artist based in Boulder, Colorado.
The claim comes from Stone’s upcoming book on the tech giant, Amazon Unbound, an excerpt of which is published here in Wired. Neither Amazon nor Rolle confirmed or denied Stone’s guess, which he says is based on conversations with the professional voiceover community, but Rolle’s voice alone makes for a compelling case.
Listen to the videos below: the first an advertisement for Cherry Creek North, “Denver’s premier outdoor retail destination,” and the second an introduction to Hapyn, a social app that seems to now be defunct (its Play Store entry was last updated in 2017). You can absolutely hear Alexa’s reassuring tones in Rolle’s voice. Or, to be more precise, you can absolutely hear where Alexa’s reassuring tones come from when listening to Rolle.
Here’s how Stone writes up the process in selecting Alexa’s voice:
Believing that the selection of the right voice for Alexa was critical, [then-Amazon exec Greg] Hart and colleagues spent months reviewing the recordings of various candidates that GM Voices produced for the project, and presented the top picks to Bezos. The Amazon team ranked the best ones, asked for additional samples, and finally made a choice. Bezos signed off on it. Characteristically secretive, Amazon has never revealed the name of the voice artist behind Alexa. I learned her identity after canvasing the professional voice-over community: Boulder, Colorado–based voice actress and singer Nina Rolle. Her professional website contains links to old radio ads for products such as Mott’s Apple Juice and the Volkswagen Passat—and the warm timbre of Alexa’s voice is unmistakable. Rolle said she wasn’t allowed to talk to me when I reached her on the phone in February 2021. When I asked Amazon to speak with her, they declined.
We’ve pinged Amazon and Rolle to confirm her involvement in creating Alexa, but don’t expect to hear much back. If the company isn’t interested in confirming Stone’s account, it suggests this is a bit of history they’d rather not draw attention to, for whatever reason.
Providing the voice for such a ubiquitous product can have its own drawbacks, too. The original voice artist behind Siri, Susan Bennett, revealed herself in 2013 (after seeing an article from The Verge dissecting the process behind the creation of synthesized voices, incidentally) but said she’d been wary about being associated with Siri. “I was conservative about it for a long time […] then this Verge video came out […] and it seems like everyone was clamoring to find out who the real voice behind Siri is, and so I thought, well, you know, what the heck? This is the time,” Bennett told CNN.
Of course, although we can hear both Bennett and Rolle’s voices in their AI doppelgängers, it’s impossible to say without inside knowledge exactly what traces of the original remain. Creating a synthetic voice starts with real audio samples, but this data is exhaustively quantized and remastered to such a degree that answering the question of whether the final product is the same as the original is best reserved for the shipbuilders of Theseus.
What is fun, though, is listening to the other examples of Rolle’s voiceover work on her website. Although she offers a restrained performance in the videos above, she’s much more animated and lively in other commercial samples. It really shows how, despite the ever-increasing sophistication of Alexa’s voice, it still lacks the range of the real thing.
Snowman is best known for games like Alto’s Adventure and Skate City, but soon the company will be exploring a very different realm: children’s apps. Today, Snowman announced an upcoming game called Pok Pok Playroom, which will be launching on May 20th on the App Store. It’s a charming, minimalist experience aimed at kids between two and six, designed to encourage open-ended play through a handful of different digital toys. But Pok Pok isn’t just an app; it’s also the name of a brand-new creative studio spun off from Snowman that will be explicitly focused on making these kinds of experiences.
Pok Pok has been in the works for several years. It started life as a side project for Esther Huybreghts and Mathijs Demaeght, who were working as artists at Snowman while also raising two young boys. (The two now serve as creative director and design director of Pok Pok, respectively.) They wanted to find something where they could work together and explore their creativity outside of their day jobs. At the same time, they were coming to grips with raising two children and figuring out how to introduce screentime in a healthy way.
“We wanted him to have some screentime,” Huybreghts says of her youngest child. “When we started looking for an app, we had a high standard of what we wanted. We didn’t want anything addictive, or loud and overstimulating. We couldn’t really find anything that was up to our standards so we decided to build something ourselves.”
They showed it to their co-workers, and it wasn’t long before it became a full-time production and, eventually, the focus of a brand-new studio. “It was immediately interesting,” Snowman creative director Ryan Cash says. Pok Pok Playroom features several different virtual toys, including things like a simple drawing tool and a board full of fun switches and buttons to play with. For the most part, the digital toys are inspired by real-world ones.
“We wanted to bring open-ended play to a device, and most of the toys we liked, in our real playroom, had that same open-endedness to them,” says Huybreghts. Melissa Cash, co-founder and CEO of Pok Pok, adds that “the choice to be inspired by some of these toys was very intentional, because these are timeless toys that have been in our lives for generations. We wanted Pok Pok to have that same feeling of being a timeless toy that grows with your children. They’re designed to reveal more things as your kids become more curious.”
That idea of open-ended play is core to Pok Pok. The games don’t feature high scores or fail states, or many other elements associated with a typical video game. Instead, much like a real-world pile of wood blocks or bucket full of die-cast cars, everything is left up to the player. “The goal starts with the child,” Huybreghts explains. “We don’t tell them what to do. Every game they play is led by them.”
In some ways, Playroom is most interesting for the things it doesn’t have. There are no in-app purchases to worry about — instead, the game will be available through a subscription of $3.99 a month or $29.99 per year, with a 14-day free trial. And the experience has been streamlined so that kids can play with it independently. That means no tutorials or text to trip them up, and a simple and clear UI where it only takes one or two taps to get to different places in the app. “It was a very conscious decision not to have any text, because we wanted an app that was as hands-off as possible,” Huybreghts notes.
The team also worked closely with a range of advisors, including teachers, early childhood educators, and sensory experts from the US, Canada, and Australia who consulted on Playroom. In one instance, for example, signs featuring gibberish text were removed based on feedback from advisors, so that young players wouldn’t get confused while they’re learning to recognize letters. “While we have worked very closely with them, they’ve never come to us with a really big critique, which would’ve been a bad reflection on us,” Huybreghts says.
Pok Pok Playroom launches next month, and the plan is to continue to update it after launch, hence the subscription. That means adding more elements to existing toys, as well as introducing new ones. The goal is to still remain relatively small and accessible even after these expansions. “We don’t want to give kids the Netflix problem, where you’re just scrolling and scrolling and you go to bed because you can’t figure out what to watch,” says Melissa Cash.
Cash notes that “a big part of our work starts when we launch,” as the team hopes to make changes and additions based largely on the feedback from players. But for Huybreghts and Demaeght, the launch of Playroom also marks the end of one unexpected journey. What started as a quest to build an app to keep their own family occupied has turned into a brand-new company and commercial product.
“It was never meant to be a big-budget project like this,” says Huybreghts. “If I had known, I would’ve added eyebrows and noses to the people.”
Tig Notaro filming a scene for Army of the Dead on a green screen. Photo: Scott Garfield/Netflix
During the weeks Tig Notaro spent working on Army of the Dead, Zack Snyder’s new zombie-heist movie, one thought kept running through her head: How on earth did telling jokes lead me to this moment?
It was Notaro’s first action gig, and the 50-year-old was stepping into a role originally performed by fellow comedian Chris D’Elia, who had recently been accused of pursuing multiple teenage girls. The allegations against D’Elia had surfaced in June 2020 and within weeks, his agency, CAA, had dropped him and Netflix canceled his deal for an unscripted prank show. (D’Elia continues to deny the allegations.) By that time, Snyder was deep into postproduction on Army of the Dead, which follows a team of mercenaries led by Dave Bautista that infiltrates zombie-ridden Las Vegas to recover millions of dollars from a casino vault; the comic played a supporting role as a wisecracking helicopter pilot named Peters. In August, Snyder announced that he would digitally erase D’Elia from the movie and reshoot his part with Notaro.
It wasn’t the first time an actor had been replaced in postproduction after being accused of sexual misconduct. In 2017, Ridley Scott spent nine days reshooting Kevin Spacey’s scenes in All the Money in the World with Christopher Plummer. The same year, the animated series Gravity Falls redubbed a character originally voiced by Louis C.K., a former friend and collaborator of Notaro’s. Snyder’s task was more demanding: It would have been a logistical nightmare to bring back Army of the Dead’s cast for reshoots during a pandemic, so Notaro would have to film almost all of her scenes in front of a green screen with no other actors in sight, and Snyder’s team would then edit them into the existing footage. (The director wouldn’t reveal how much this process tacked on to the budget — but he did say it was cheaper than creating the movie’s CGI zombie tiger.)
Notaro and Snyder look back on the convoluted filming process with a mixture of awe and relief. “We kind of knew what we were getting into,” says Snyder, “but had no idea how hard it would be.”
When Snyder’s casting director mentioned Notaro for the role, the director recalls, “My brain just went, Wait. Tig. Yes. That’s it. [Then] I’m like, ‘Do you think she would do this, though?’ ”
“I was so baffled,” Notaro says of learning Snyder wanted her for the part. “I felt like there was some sort of misunderstanding.” As an actor, Notaro had almost always played a variation on herself. The closest she had come to something like her role in Army of the Dead was in the series Star Trek: Discovery, where her stunt work was limited to falling onto a mattress. She’s also significantly shorter than D’Elia and has a wry, intimate sense of humor, while he skews bombastic and vulgar.
Snyder sent Notaro a screener of the movie, in which the editing, CG effects, and sound were nearly finished, and explained how the filming would work if she signed on. It was reassuring, but Notaro had seen D’Elia’s footage, too. “It didn’t seem possible for me to take on what Chris did. We’re such different actors and comedians,” she says. “I honestly thought, regardless of what’s going on in his personal life, that his performance was excellent. But Zack said, ‘We want you to do exactly what you do.’ And, in turn, that’s all I did.”
Notaro prepped for her role as Peters by studying the script — and learning how to handle a prop weapon. “I did firearm training over Zoom in my office while my children were playing Lego in the next room,” she says. “I hid it from them, not because they’d get hurt but because I didn’t want them to think I had a machine gun. That lasted probably 20 minutes.”
Meanwhile, the movie had to delete D’Elia to make room for Notaro. Her footage couldn’t be pasted over D’Elia’s — matching their movements beat for beat would be too complicated, and the actors’ size difference would make Notaro look unnaturally large. “I had to do this incredibly technical experiment, re-creating every scene, shot for shot,” Snyder says. “My visual-effects supervisor, Marcus Taormina, did the work of taking Chris completely out of the movie so Tig could have freedom [to move] within the scenes.”
The original footage had been shot in Albuquerque and Atlantic City in 2019. For the new shots of Notaro, which began filming in early September 2020, Snyder and the visual-effects team replicated the physical spaces and camera angles of the original scenes at a studio in Simi Valley, California, referencing the old footage on a monitor and using greened-out props, laser pointers, and tennis balls hanging from stands to approximate where Notaro should be looking. And there could be no ad-libbing: Notaro’s dialogue had to sync with the other characters’ reactions.
Save for a half-day shoot with co-star Ana de la Reguera, the scenes in which Notaro physically touches another character were either pantomimed or filmed with her assistant, Patrick McDonald, wearing a green suit. “They’d line up a piece of tape on the ground and say, ‘Okay, you’ve fallen in line with a group of people. You’re walking into a building,’ ” Notaro recalls. “I’d be like, ‘Is it kind of a mosey? Okay, I’ll mosey.’ Then Zack might say, ‘That’s a little too fast with the moseying,’ and we’d start over again.”
For the film’s climax, she pretended she was flying a prop helicopter away from a nuclear-bomb explosion while Bautista — who’d filmed his part a year before — battled a zombie behind her. “That’s where I’m like, ‘I am not a trained actor,’ ” she says. “I had to be yelling lines, I have a zombie in the back of my helicopter, I have to press the right buttons and flick the right switches. You’re sitting there with all these adults standing ten feet away while you’re alone, acting like you’re crashing. I thought, Oh my God, I feel like an idiot. Can we be done with this?”
To begin the next stage of postproduction, Snyder and his team sifted through all of Notaro’s footage to pick not only the best takes but the ones that synced with the dialogue and action. When things didn’t match up, they used a CG scan they had made of Notaro’s body to create a digital version of her they could insert into scenes, mostly for background shots.
“Some of the trickiest shots were where she’s walking in the group — I had to match the [camera] pans, and it was difficult to get the perspective to match,” Snyder says. “It was a few months to get all the individual effects and make it seamless. Marcus was able to fudge it around and get it to work, and [her footage] went in surprisingly easily.”
As Notaro jokes, filming this way went to her head. “Because I was the only one on set, I started to think I was the star of the movie. Then I told Zack that I realized, Oh, not only am I not the star, but a lot of these shots are me blurred out in the background.” Snyder rewarded her efforts with a fake Oscar statuette for Best Out-of-Focus Actor.
When Snyder first announced he would be replacing D’Elia with Notaro, many D’Elia fanboys were upset about both his replacement and what they saw as his undeserved cancellation. But when the first Army trailer was released in April, a quick shot of Notaro pouring gasoline in her pilot outfit, complete with aviator sunglasses and a cigarillo dangling from her mouth, immediately went viral.
“My phone started blowing up: ‘You’re trending on Twitter! Everybody’s talking about how you’re sexy AF,’ ” she says. “I was so confused. I really thought there was going to be a backlash from me replacing Chris. I didn’t think I was going to be trending for being a badass.”
As surges of COVID-19 cases spread throughout the United States over the past year, hospitals overwhelmed with patients quickly ran out of everything: masks, gloves, beds, space, doctors, nurses. Hospital workers in New York City, one of the first epicenters of the US pandemic, wore garbage bags as protective equipment. Patients in overcrowded California emergency rooms spent hours lying in hallways. Nurses in Missouri worked twice the normal number of shifts to make up for sick colleagues.
The catastrophe was clear early on in the pandemic, and experts warned that there weren’t going to be enough hospital beds available for everyone who needed one. But even with that warning, there wasn’t enough time to build up capacity before the wave of COVID cases broke and the demand far outstripped supply, particularly in intensive care units. In the end, hospitals were only able to withstand the surge with considerable cost to overworked doctors and nurses.
Now that the immediate emergency is subsiding, those same hospital and ICU beds are getting a closer look. COVID-19 won’t be the last threat to stress-test the country’s fragmented, private health care infrastructure. But getting hospitals ready for the next disaster isn’t just a matter of spending money. Real preparation will require changing the way hospitals work together, transforming them from isolated enterprises into a collaborative network — in other words, thinking of hospitals as infrastructure.
With Biden’s infrastructure push gearing up, that rethinking may have to come soon. As it makes the case for the American Jobs Plan, the White House has talked about making hospital networks more resilient, alongside similar efforts in food systems, transportation, and the electric grid. But the only tangible measure detailed in the bill is new funding for VA hospitals, the only public medical facilities directly operated by the federal government. If no new measures are added, the White House may end up missing its chance for the more profound change many experts say is necessary.
“I think that there probably needs to be a re-analysis of the day-to-day healthcare resources of communities, and rethinking those with the idea of a surge in mind,” says Michael Redlener, an associate professor of emergency medicine at Mount Sinai Hospital in New York. “The healthcare system is designed to meet everyday needs. It’s not really conceived of as something for a surge.”
The simplest physical signal of how ready our hospitals were for COVID-19 was the number of beds they had ready for sick people. Before the pandemic, the US had around 2.8 hospital beds per 1,000 people, according to one analysis, which is far fewer than countries like Germany, France, and Japan. Each day, most of those beds were full, which didn’t leave much slack in the system for a sudden influx. Some states had less room than others: Connecticut, for example, only had 0.45 unoccupied beds per 1,000 people, according to the analysis.
Those numbers used to be higher: over the past few decades, the number of hospital beds per 1,000 people in the US has steadily declined. Hospitals have eliminated beds, health care systems have closed down inpatient care centers, and small, independent hospitals have shut down. The trend is particularly acute in rural areas, where over 100 hospitals have closed since 2013.
In large part, that decline is a feature of the US for-profit health system, which pushes hospitals to keep overhead low. Keeping a bed open and ready to go costs money; if there’s not a patient to fill it, it’s seen as a cost waiting to be cut. Building excess capacity can be bad for business.
“In a fee-for-service system, hospitals know what their patient flow looks like, and they’re prepared for that. They’re not always prepared for the worst case scenario,” says Fredric Blavin, a researcher in the Health Policy Center at the Urban Institute who studied hospital capacity at the start of the pandemic.
On the flip side, keeping too many beds open could incentivize hospitals to fill them — giving potentially unnecessary care and costs to patients. Given that model, it isn’t realistic to maintain the space for hundreds of additional patients all the time, says Michelle Mello, who studies health law and health care delivery at Stanford Law School. For Mello, preparedness is more about improving communications between hospitals as they manage the resources they have.
Mello is based in Northern California and says that there wasn’t a good way to route patients to nearby hospitals if the one they showed up at was full. “There wasn’t a great system for sending the things you might need to take care of COVID-19 patients,” she says. The same was true in New York City, Redlener says. “If you talked to someone in Queens during the height of the pandemic, there weren’t enough ventilators for every patient who needed one. But if you looked at the region, there was probably enough capacity to take care of everyone,” he says.
When COVID-19 cases surged in 2020, small hospitals were quickly overwhelmed. They struggled to find larger medical institutions to transfer patients to. There’s no data system that offers easy visibility into the beds and resources available at various hospitals in the US, so hospital staff in many places relied on working the phones to find space for patients. In theory, the hospitals were coordinating with state and local health departments — but in practice, they were mostly operating on their own.
That lack of collaboration also makes economic disparities worse, since wealthy hospitals are more likely to have stockpiles of supplies on which they can rely. “A wealthy hospital might say, ‘I don’t know what’s coming, we could have a surge tomorrow, I don’t want to share.’ On aggregate, that creates huge inefficiencies because everyone’s looking out for their own good,” Mello says. “It cries out for a more cooperative approach.”
A more cooperative plan might be modeled off of something like New York’s regional burn plans, which are designed to respond to a fire or disaster that leaves a large number of people with serious burns, Redlener says. Those plans involve groups like the fire department, burn centers, hospitals, and the Department of Health; they make sure that there’s a way to create beds for burn patients and have specialists available to see them. “A plan exists, and it thinks about everything in a multidimensional way,” he says.
There have been some bright spots of collaboration over the past year, says Nancy Foster, the vice president for quality and patient safety policy at the American Hospital Association: federal and state authorities worked with hospitals to distribute monoclonal antibody COVID-19 treatments, for example. Early evidence showed that the drug, which has to be given to people just after they’re diagnosed with COVID-19, can blunt the severity of illness. Rather than let individual states and hospitals order it, the US Department of Health and Human Services allocated it out to states based on the number of COVID-19 cases they had over the previous week.
“Those kinds of opportunities to sit together and to really appropriately allocate resources in an effective manner are extraordinarily helpful,” Foster tells The Verge. “They’re part of what we have to think about going forward.”
Making sure that spirit of collaboration continues after the acute emergency of the COVID-19 pandemic, though, could be difficult — again, because the US health care system is private and institutions typically act independently. It’d be a challenge to mandate coordination from the federal level, but there could be incentives for organizations to participate, Mello says. “There are already efforts to do these things on a voluntary basis,” she says. That principle holds true for other types of emergency preparedness, like flexible staffing or better supply management, Redlener says. “There’s a balance, and a push-pull between mandates versus incentives to participate,” he says.
But reinvesting in programs that would make the health care system more collaborative — and, therefore, more resilient — benefits everyone. “We truly don’t know what the features of the next disease will be and what the needs will be,” Mello says. “That should create incentives for people to cooperate and hedge their risk.”
Today we’re focused on one of the most complicated problems of all: content moderation. If you’ve kept up with Decoder, you know that content moderation seems to come up almost every week — the question of how platform companies decide what to leave up and what to take down is messy, controversial, and extremely political.
And if something on the internet is messy, controversial, and political, you know that Facebook will be at the bleeding edge of it. Last year, the company announced that it would send difficult moderation problems to a new entity it calls the Oversight Board — a committee made up of lawyers, politicians, and speech experts that would rule on whether specific content takedowns on Facebook were appropriate.
That board just got its first big test last week, as it issued a decision about whether former President Trump’s indefinite ban from Facebook platforms would stay in place. And the decision was to kick the issue back to Facebook — the board said Facebook didn’t have an actual policy in place for it to review, and that Facebook should make one and send it back to the board in six months. In the meantime, Trump remains banned from the platform.
So what does all that mean? What is the Facebook Oversight Board, what are its powers, and how is it even independent from Facebook itself? You’ve probably heard people call it the Supreme Court of Facebook — is that the right way to think about it? Will every platform require a moderation court like this in the future, or is this just another way for Facebook to exert influence over the internet?
Kate and I talked about what the board is — and isn’t — what its powers are, and what this decision means for the board’s authority in the future. And we talked a lot about what it means for private companies to have things that look and feel like legal systems — if you step back, it is bonkers to think that any company needs to fund anything that looks like a Supreme Court. But Facebook is that big and that globally powerful. So here we are.
One note, I mention a Supreme Court case called Marbury v. Madison — that’s the very famous case from early in the country’s history where the Supreme Court basically gave itself the power to invalidate laws passed by Congress. Oh, and you might pick up that I’m nervous here and there — that’s because I always, always get nervous talking to law professors. I feel like I’m back in my 1L days every time. Bear with me.
Okay, Kate Klonick, from St. John’s University Law School. Here we go.
This transcript has been lightly edited for clarity.
Kate Klonick, you’re a law professor at St. John’s University Law School. You are also one of the foremost chroniclers of Facebook’s moderation efforts. Welcome to Decoder.
Thank you so much for having me.
We’re talking the day after the Facebook Oversight Board released a big decision about whether Facebook was correct to indefinitely ban Donald Trump from its platform. There’s a lot of concepts in that sentence alone. Let’s start with the Facebook Oversight Board itself.
In February, you published a long piece in The New Yorker called “Inside the Making of Facebook’s Supreme Court,” which detailed the process by which Facebook conceived of having an oversight board, literally the meetings and the software they use to create this board virtually during a pandemic. This decision feels like the first big moment for that board. What is the Facebook Oversight Board?
We still don’t know exactly. I know that’s the worst answer ever to start out, but I think it’s the right one. So we can talk about how it’s been talked about, and I think that that’s going to lead us to what we saw yesterday and what we can make of this opinion.
So in November of 2018, Mark Zuckerberg announced that he was going to set up this, what had been colloquially joked about and called the “Supreme Court” of Facebook. This idea that they were going to start running certain types of content moderation decisions. Once they had finished being appealed internally at Facebook, they would make it possible to appeal to an outside, independent oversight board.
And the question was, how the hell do you set that up? You have a pretty fundamental principal-agent problem right off the bat. And then how do you set that up? What does it make you do? And how do you make this work when you’re talking about public rights of freedom of expression and international human rights law? This is where the conversation was and still is, to some degree. And you’re talking about a private corporation. How do you make something have teeth? How do you make it legitimate? How do you make it independent? All of these questions were very open and on the table.
And so about six months after that announcement from Zuckerberg I started following, inside Facebook, the government and strategic initiatives team, which was the team that had basically been tasked, under Nick Clegg, to come up with this solution to what exactly this board was going to be and how they were going to solve all of these institution-building problems.
And so I started following that and I ended up doing that for 18 months, watching as they wrote their documents and figured out what they were going to do. And there’s a lot of stuff to unpack just on how they did make decisions. And so I’m happy to go over the basic framework of how they solved the problem of independence. Because I think one of the biggest things that we don’t know about the board is what it is, and what it is indebted to for Facebook, and what makes it plausibly independent?
Let’s step back for one second. So Facebook is a huge company. They operate a massive platform, several massive platforms, around the world. Their content moderation decisions have an enormous impact on people, on culture, on democracy. They wander into speech issues at a global scale where a team of people in the United States cannot plausibly understand it — speech issues in other countries that Facebook runs into at scale. You brought up human rights law, some of those things directly lead to horrible outcomes like genocides, literally, with Facebook.
Mark Zuckerberg is unaccountable to the shareholders of Facebook, the corporate structure. So it’s a very unique company. He owns a majority of the shares. He can’t be removed as CEO, fundamentally. And so his solution is, “I’m going to set up a different thing to hold Facebook accountable and to review our content moderation decisions.” And it sounds like what you’re saying, the first problem is how do you create that thing to be independent? And how do you pay for it in a way that maintains its independence?
How did they solve that problem?
Well, that problem was interesting. So they decided to set up a Delaware trust corporation in October of 2019, and then the next day they arranged for that trust corporation to serve the Oversight Board, LLC, which is a limited liability company.
The entire purpose of the trustees, and the entire purpose of the trust, was to administer a $130 million irrevocable grant that Facebook gave to the trust and then snipped the purse strings from. It’s not an endowment. This is an important distinction because the trustees actually can’t invest the money. There’s no investment committee. It’s specifically not allowed. It’s not enough money to be able to endow it. It is enough money to probably have it run for five to six years, and then it is contemplated that there will be an endowment.
So there is the question of well, if they really do go at Facebook and really do hold Facebook accountable in some way, they are running the risk of having their funding cut off. But in the short term, six years feels like a long time, and this is what they were charged to do. So we’re seeing what the board is doing. But this is basically how it works. All of the board members and the administration are basically employees of the LLC, which is itself controlled by the trustees. So that’s how this breaks down from a business standpoint.
It was a fairly elegant solution. The trust documents are pretty interesting to read if you’re into that kind of thing, which I wasn’t but I had to do anyway. So I think there actually is, for right now, for the next five to six years, I think there is a fair amount of financial independence. We’ll see what happens four years down the road.
Who’s on the Oversight Board? We keep talking about it like it’s a court. I don’t know if that’s correct. I want to get there. But they have some people who are making decisions and writing opinions and disagreeing. Who are those people?
Right now the board is four co-chairs with a total of 20 members, who all are basically hearing cases related to user appeals that come out of Facebook on content moderation, or things that Facebook itself kicks to the board to review, like the Trump suspension.
So those 20 individuals, it’s a pretty illustrious group of people. There’s a Nobel peace prize laureate, a former prime minister of Denmark, former editor-in-chief of The Guardian, a former circuit court judge, a former Supreme Court clerk and Columbia law professor. Not to mention a ton of other people that are just experts in human rights law and lawyers and freedom of expression in their own right. So it’s a pretty well-staffed group that has a lot of experience, both with institution-building and freedom of expression and international human rights.
So for the Trump decision, did all of them hear it and vote, or was it a small group? I mean, normal courts, you go to the appeals court, they have a lot of judges, but the first three of them hear it, right?
Yeah. Usually. So that’s precisely what happens here. So the process around hearing cases is a five-person, randomly selected, anonymous panel hears the case. And then right now how they basically have this working, because some of these things were not laid out in the bylaws and it’s up to the board to decide them on its own, is basically that right now the panel is writing up a draft of whatever it is that they determine is the right solution to the problem. After, they arrange and collect facts and ask people for more opinions and read all the briefs and everything else. And then they start circulating that. And a majority of the full board has to approve the final decision. And that was basically exactly how it worked for the Trump case.
One thing that struck me reading the Trump decision is that it is anonymous. There’s frequent reference to the minority, and how the minority would have judged things differently. We don’t know who’s in the minority. We don’t know how big the minority is.
We don’t know how big but it can’t be bigger than nine people.
Sure. Yeah. But that’s a pretty big range from one to nine. And that’s all fine. So it’s pretty anonymous. And then the members of the Oversight Board are out in the world. They’re doing interviews with Axios. They’re doing interviews at the Aspen Institute. They’re publishing their own blog posts about this decision. I can’t quite understand what part of it is supposed to be anonymous and why, and what part of this is really public.
Yeah. No one knows. That’s a great question. All day yesterday I was a little shocked at all of the different types of thoughts and collective pronouns that were being used when people were talking. One of the main reasons to have anonymity on a panel like this is that anonymity gives you a certain amount of intellectual privacy. The idea that you’re not going to be publicly shamed for being in the minority is pretty key.
The other thing is that these people that are on the board are all over the world. And a lot of the cases that they are touching on possibly pose really very real security risks to them, should their names be attached to the outcomes. And so that was another consideration for when all of this was being discussed about whether they should be anonymous or not.
What we didn’t know is whether there were going to be dissents. And so what’s super interesting here is that there’s no dissents. There is one decision and then they’ve decided to fold the idea of a dissent or a concurrence into these notions of a minority of the panel, a paragraph that says, “The minority of the panel felt differently about the reasonings for this.” So that was fascinating.
So that would be one thing, if it was just getting read from the bench and that’s all you heard about it, and you let the reasoning of the opinion or the decision stand on its own. But you didn’t. You then had everyone off tweeting all of their thoughts about everything and talking. And I think that that’s one of the things. I even asked one of the people who came on my show that day, I was like, “What’s going on?” If this was a court you wouldn’t be doing a TV hit to talk about what the conversation was like in chambers. That would be totally verboten. So it is like, “Well, are you a court? Because this sounds a lot like a court but you’re not exactly comporting yourself as a court.”
I don’t know. The Supreme Court doesn’t say, “Our big decision in this massive policy issue is coming out tomorrow at 9AM. Everybody get ready,” and then queue up its press hits. I keep coming back to that question. Is it a court? We’ve all called it this Facebook Supreme Court. It seems right, conceptually.
Facebook has difficult decisions to make. It doesn’t seem to want to make them or be accountable for them so it’s going to kick them over to this other place. They will take the hit and Facebook will presumably do what they say. But if you’re going to be that kind of institution, there’s just a part of it where that’s not how courts actually act. But just conceptually, I can’t tell if this is a court or not.
Yeah. I think they’re there with you. I think they’re still deciding what they want to do. So for example, just to put it right on the nose, they talked to me, a number of the board members, on condition of anonymity. They talked to me for The New Yorker piece explaining some of their first decisions and how they had reasoned through them and what they had thought. And that was a level of access that a court would not usually grant to a reporter or anything else. And after that, they decided not to do that anymore.
So there are changes that they’re making to their policy. It’s not set in stone. I’ve heard a lot of people wondering, “Well, this is an incredibly well-reasoned and rigorous legal opinion, so this is most definitely seeming like a court.” But the way that it’s being talked about by the people who are making this decision is not quite the same as what we expect to see out of courts that, traditionally, we’ve seen in the United States.
What is the board’s power over Facebook? What can the board make Facebook do, specifically, if it’s unhappy with how Facebook is acting?
So the power that the board has over Facebook is incredibly narrow, but it is a small devolution of power that I’ve always argued is actually a lot bigger than it seems. For right now, for content that is removed or kept up on Facebook, after a user has appealed it internally with Facebook, they can take that appeal and give it to the board. The board hears their case and makes the determination overturning Facebook’s decision to keep down the content or put it back up. And it’s only content, single-object content. It’s not even pages. That was a special thing that the board considered for Trump. Users can’t appeal their page takedowns right now.
Facebook agreed to adhere to whatever the board’s decision is on that specific piece of content. So my specific piece of content, my picture of my dog that gets accidentally removed from Facebook and the Oversight Board says it has to go back up, that has to go back up. But if you had also posted a picture of my dog and got it taken down, they have no obligation to restore your content. They have said that they will make efforts to restore similar content, but there’s no promise. That’s it.
That’s very narrow.
It’s super narrow.
But they’re not actually saying whether the policies are right or wrong. They’re saying whether the enforcement of policies is right or wrong.
Yes. But there is one other thing that they obligated themselves to do, which is that when the Oversight Board makes public policy recommendations, Facebook has obligated itself to respond within 30 days to the Oversight Board’s policy recommendations and whether they have been implemented or not, how they have been implemented, or if they weren’t implemented, why not?
And this is a form of weak-form review. Harvard law professor Mark Tushnet puts it this way: it’s the court calling on basically the executive, the legislative body, to come back in and fix their problem, and then report back as to how they ended up doing that. And so that is actually also a pretty important reputational pressure that the board can put on Facebook.
So this leads right into this decision. The simplest way of understanding this decision is: The Capitol riots happened. Trump posted a bunch of videos and posts that ostensibly encouraged people to stop acting badly at the Capitol, kind of supported it, pretty messy, actually, in terms of just the straight interpretation of what Trump meant to have happen because of these videos and posts.
Facebook tries to take them down. And he tries again. They take them down and say, “You’re indefinitely banned from Facebook.” This is the 6th and the 7th of January, a very chaotic time in America. All the other platforms are doing the same thing. Facebook then says, “Well, we got this board. We’re going to kick the indefinite ban of Trump to the board to see if that was the right decision.”
And the board comes back and they say, “Fine. It’s fine for you to have indefinitely banned him. We’re not going to write a policy for you on indefinite bans. Also, you have no policy on indefinite bans.” And they seem very unhappy with Facebook. There’s actually a tone to me in this decision that’s like, “Don’t put this on us.”
And I think you could read that [as], “Come back to us in six months with a real decision. And we’ll tell you what to do.” And that reads to me as asserting its authority in a way, but it’s also really, really narrow and kind of punts the issue.
I think it punts almost not at all, actually.
I think that that’s been one of the worst takes that’s come out of this. It’s not punting the issue at all, because the issue’s coming back. What they’re saying is, we’re not going to carry water for you, Facebook. How I think about it is this: one of the conceptions, as we’re talking about, is what is this board? Is it a court? What is it?
One of the questions is what type of, level of, court is it going to be? If it’s a trial court, then it’s a fact-finding body and it does all of this work. And maybe it makes statutory interpretations of rules.
Or is it more like a Supreme Court or a court of appeals, which is going to actually review the law and whether the law matched the facts of the thing? And what we see out of this decision is they are doing all three. I want to just hit on this really quickly. You talk about the fact that they lay out the events of the 6th and the 7th of January and that those were really hectic, crazy times.
Do you know what a gift it is to have a cogent, rigorous, well-researched record of everything that we know happened on the front end, but matching that with Facebook going on the record on the back end about what it was doing and how they were doing it? For the last 15, 20 years, we’ve only had people leak stuff out of the companies to tell us this type of thing. There has been no process for this. It was like a breath of fresh air to have that resource going forward.
But that’s kind of a trial court type of hat. And so then they kind of get into this appellate court type of hat. And they’re like, “No, we’re not a legislature for you. And our mandate is to see what your rules are and whether or not they were consistent with your values and international human rights standards. And if you don’t have a rule, we can’t do that. And we’re not going to make one for you because that’s not our job. And we know what you’d like us to do. You’d like us to basically make this somebody else’s problem for you. But we’re not going to do that. And we resent that you would even ask us.”
And I just thought it was a powerful, powerful response, really rooted in the rule of law and procedure, instead of getting sucked into the vortex of intractable online speech problems and the definition of newsworthiness and public figures and things like that, [which] is never going to get you anywhere.
Let me push back on that. I find myself really wondering what the limit of authority for this Oversight Board is.
I think they punted specifically because they said, “You need to come up with a proportionate response to Trump’s actions and come back to us in six months with that response based on any rule,” which sounds like, “Someone’s got to reinstate Trump to Facebook and it’s not going to be us. We’re telling you that an indefinite ban is not acceptable. There’s no rule that says indefinite bans exist. But we’re not going to tell you the term of an indefinite ban.” The temptation to say that the board claimed for itself a greater authority by not making a decision is very high.
I see a lot of lawyers making that comparison to Marbury v. Madison in the Supreme Court. There’s this big historical parallel that seems very tempting. But here, narrowly, it’s just like, they didn’t want to be the ones to put Trump back on Facebook.
Yeah, I think that that’s probably also right. So here’s what I’ll say about what the term proportionality means in an international human rights context. Which is basically, the idea of proportionality is that you have some type of ability to atone for your punishment, or you have some type of proportionate response to whatever it is, the underlying problematic act that you’ve committed.
And it’s not clear to me that it is possible to have a permanent suspension of someone’s freedom of expression on a platform, or ability to be on a platform, and have it ever be consistent with international human rights standards. A permanent suspension is basically de facto disproportionate.
But I think that it’s a good argument. Because I do think that may be right. But they want that finality of that decision. They want Facebook to have said that that’s what’s at stake. And they don’t want to have to say it for them. And maybe it’s like punting the issue because they don’t want to deal with it.
But they’re going to have to deal with it one way or the other when this comes back.
But this is where I think that the wishy-washiness really gets me. Shouldn’t they have said, you’ve got to un-ban him until you come up with a rule that would properly ban him, as opposed to this reflexive reaction of, “Something bad’s happening and he’s making it worse, and we’re going to turn him off”?
Yes. But I think that it’s so messy. But this is almost an interesting question of administrative law, which is that they defer to the decision of the underlying body. Right? And so, they agreed that they made the right decision at the time to take him down. And it’s not clear that enough time has passed, that that problem of imminence, or dangerous organization affiliation, or lauding dangerous organizations, has passed.
And so that was the other part; “Well, we can’t make that decision. You have to do that.” That part was the puntiest of all of it, I think. You’re right. They could have basically reached some type of determination that was the opposite, around the ban. And that part, they absolutely did put it back on Facebook to do something one way or the other.
One of the things that really came up in this decision a lot was the board asked Facebook, “Have you ever applied this exception called the newsworthiness exception to Trump, where he’s doing something that breaks your rules, but because he’s the president and he’s newsworthy, you’re giving him a pass?” Everyone assumed this was Facebook’s justification. Facebook said, “No, we’ve never applied the newsworthiness exception,” which, A, I know you have some strong feelings about newsworthiness as a concept, but, B, that is a big surprise.
It’s not a surprise, because I don’t think that they’re lying. I think that they didn’t technically apply their newsworthiness exception to Trump. Trump is instead on a special list of newsworthy people, so it’s a different standard. They didn’t lie. They have different rules for different types of people, for people that are high-profile, people who have certain numbers of followers. We’ve known this for a long time. We’ve never had access to this list. We don’t know how it’s administered. We don’t necessarily know what goes into decisions about the content those individuals post. So this is one of the best parts of this, the board kind of saying, “You have to tell us how this all works, because this doesn’t make sense to us, and it seems like you made it up as you went along. This isn’t a standard at all, and so we can’t even review it.”
I think that this is great, because, I mean, I wrote a paper in 2018 called Facebook v. Sullivan, which kind of was supposed to be a little play on New York Times v. Sullivan, which established the public figure and newsworthiness kind of considerations in First Amendment law. It toyed with this idea of, “Well, how are they possibly defining newsworthiness and public figures?” I talked to a number of former policy people that had left the company in 2013 specifically around this newsworthiness question.
Because they thought that this was an intractable standard that could never be consistently applied and was always going to be a question of, “Newsworthy to whom?” That was always going to be decided by a group of people in Silicon Valley, and that was bullshit. Then the group of people that ended up being the winners just wanted to use it as a way to make ad hoc determinations on a case-by-case basis. I love that it’s coming up and that this is something that the board raised, because I think it’s just absolutely fascinating.
That special list that Trump was on, we don’t know how big it is, right? But Trump, his relationship to Facebook and to Twitter, he’s always sort of gotten his own space, right? It’s always been nuclear to moderate Trump in any way, shape, or form until it crossed the threshold of January 6th.
Is this decision from the board saying, “You can’t have those kinds of soft exceptions anymore. You have to treat everybody the same,” or is it, “If you’re going to have exceptions, you have to be clear about them”?
I would say the latter, but I don’t know. They might say that it does not comport with international human rights law and principles of law to have two different sets of speech rules for people.
Even if you’re the President of the United States?
Even if you’re the President of the United States. Or they might say that you are allowed to have different classes and types of things, but you have to be consistent about what classes people are in, and you have to tell us what it means to be put into this class. I mean, so one of the things that I wrote about with some of my research is at some point, they used to define public figures by the number of Google hits you had or the number of times you showed up in Google News.
They would use Google to determine whether or not someone was a public figure or a combination of that and how many people followed you on the platform. But these are standards that…they changed all the time. I have no idea where that standard is now, because we’ve had no transparency into how it changed or where it’s gotten into. So I don’t know. I’m excited for this conversation to have gotten to such a sophisticated place finally, after the last 10 years of nonsense, and yeah, I’m really looking forward to it.
There’s just a part of this where Facebook gets to pretend it’s the most important thing in the world all the time. And it’s created this board. And here we are talking about it as though it’s a supreme court.
And right next door is Twitter. And they’re like, “Yeah, we’re just banning the guy. He’s gone. We’re not telling you if he’s ever coming back. Maybe he’ll never come back. You’re just never going to know.” And there’s no process by which Trump or any of his team can appeal that decision because Twitter is a private company, it’s their platform. They can do what they want.
Facebook is trying to create this other thing that provides moral, legal, spiritual justification for a snap decision. I just can’t tell if that is correct, or whether it’s fundamentally distracting, or whether everyone should have a giant oversight board.
Yeah. I mean, I think that all of those things are the things to be thinking about. I think that that’s all I’ve been thinking about for the last two years. But I’ve been by myself.
That’s why I wanted to talk to you.
So this is so much nicer than just sitting in my apartment, staring at my dog being like, “Why won’t you tell me what to do about the Facebook Oversight Board?”
So I think that that’s completely correct. So the way that I think about this is that Facebook has basically chosen a path of governance. They are in — and so is Twitter — an intractable kind of situation in which they are forced to make these terrible content moderation decisions that govern and have huge public ramifications and that compromise human rights, like freedom of expression and safety.
And at the same time, they’re privately held companies. And they’re privately held companies that operate transnationally and in the exchange of information. And so they are basically, they are a pretty big deal. They’re pretty much, in strength and power, outside the ability of any one country to shut them down everywhere. Not even the United States could shut down Facebook or Twitter everywhere. They could shut it down in the United States briefly if they really wanted to. But that’s about it.
And so I think that, you have to galaxy-brain yourself a little bit and make yourself go to a new place of, what is the world going to look like if entities like this exist? And they’re governing public rights? And we need to figure out a way to democratize or hold accountable private companies governing public rights, especially when there’s public rights like freedom of expression, or rights that you have traditionally excepted from government control. Because governments are traditionally bad at telling us what to do with our speech, and dangerous when they tell us what to do with our speech. So it’s hard to make government the solution to this either.
And so I think that Facebook chose a governance path with the Oversight Board. And I think that they hoped that they would be shoving off these substantive decisions. What we saw yesterday was that that didn’t work out quite the way that they thought, or at least it isn’t so far.
I mean, there’s a couple of things that could happen out of this. The Oversight Board could be a total distraction. We never get anywhere with it. It gets disbanded in five years or whatever. But at least it was a pretty noble experiment and gave us a new valence or way of thinking about how to solve some of these problems.
The other thing is that it could take hold in the idea of people being entitled to boards like this, or [that] being able to avail themselves of boards like this becomes something that is either mandated and regulated by governments — right now Canada’s contemplating mandating the creation of oversight boards within their country for these speech platforms — or it’s something that the public simply demands of these speech platforms. And they have to put them in place themselves, and Twitter gets forced into it because it becomes the next wave of how we deal with platforms.
I have no idea. I think about this all the time. But I think that, at least for now, it looks like the Oversight Board is pretty serious. It really could have been people taking their checks and rubber-stamping Facebook’s decisions. And it could have been nothing. But a 40-page decision that cites international human rights law and outs Facebook for not answering questions that were posed to it after it created this board to begin with. I think that this is a pretty serious — Well, right now, it looks like a pretty serious group and decision.
So the board submits a bunch of questions to Facebook as part of its Trump decision-making process. Facebook just says, “We’re not going to answer seven of your questions.” Is Facebook allowed to just not answer the board’s questions?
See, I love this. This is like, people and my students are being like, “Is this legal? Can they do this?” And I’m like, “Anything is legal or not legal until someone tells you not to do it.” Can they do this? Well, they just did. And I don’t know what we do to stop them. I mean, what we do to stop them is the board tells them they can’t.
And the board did that as much as they could, and they made it public, and it’s not been well-received that they didn’t answer those questions. And I think there are probably a number of people at Facebook right now who are panicked over how to move forward with more requests from the board in the future. But if you think about it from the perspective of, if a government was called before a court to answer questions and the government said, “I’m really sorry, court, we can’t tell you,” it just wouldn’t fly. You’d be in contempt of court. That’s the end of that.
I think that the interesting thing here is that it ends up being a real question of legal realism and public pressure and reputation for the company, like how bad it’s going to look if they spent $130 million in two years and 20 brilliant people’s time to do this, and then they don’t pay any attention to it.
One of the things that strikes me as you describe that process is, the United States Supreme Court likes to say that it makes very narrow decisions, but it actually has sweeping authority over American public life. Should our schools be segregated or not? The Supreme Court said not. And they kind of make these decisions that have sweeping impact over our lives. And they actually kind of restructure society.
The board’s power here is limited to, “Well, you took something down, but you should put it back up.” And it seemed like in this decision, they want the additional power to say, “How does your algorithm work? What do you see your algorithm promoting or disincentivizing or otherwise modifying in the conversations had on Facebook? And how does your business model plug into that algorithm and how do you make decisions about it?” And they just don’t have that power, and it seems like they really want it.
Completely. I also got that out of the opinion. I was in the House of Lords testifying about the Oversight Board with Alan Rusbridger, who’s on the board, former editor-in-chief of The Guardian. What he basically said in his House of Lords testimony was that they were coming after the algorithm. And I was like, “That’s interesting. That wasn’t in the charter.” And so it was kind of foreshadowed, but I saw that in the opinion as well and I think that it’s really interesting, and I think that it is absolutely the right question because, as Jack Balkin says, it’s the crown jewels of how all of this works and what the real power source of Facebook is.
I know for a fact, because I’ve read the founding documents so many times, that it is not contemplated at all that they’ll have any type of visibility into that, but I’ve always argued that that doesn’t mean anything. Just for the same reason you’re talking about it not being legal, there’s no reason that they can’t use their public pressure and authority and sway, which is really all that it is, to start asking these hard questions. And I think this is pretty soon to do it, but I think that it’s great that they’re going in that direction.
That leads me into the connection between, maybe five years from now every company will want an oversight board, or governments will demand that you have such a contraption connected to your company.
But if your expertise and your precedent is all about Facebook’s algorithm, then how on Earth can you connect that to TikTok or connect that to YouTube, which have wildly different business models, wildly different algorithmic inputs and outputs? These are different products. They have different formats. They have different business pressures.
Oh, you don’t. And I would actually say that the worst possible outcome would be to have one oversight board.
Well, I think this one kind of wants to be the one oversight board. They don’t even call themselves the Facebook Oversight Board. They’re just the Oversight Board.
That’s true. I thought that that was weird. And honestly, you want to know why I think that they did that? I think that they all are so…the name Facebook is so toxic that I think that they don’t want to be associated with it by name. And so this is something I read about in the Yale Law Journal article that I went into, which is the different ways that this could play out. And one of the ways is that basically, the trust corporation that I’ve mentioned before, that Twitter dumps $130 million into that and forms their own oversight board to apply Twitter’s terms of service and Twitter’s community standards to whatever it is that Twitter wants to set those at.
Or, Twitter makes its own, which is just as easy, I think. Twitter makes its own trust and its own oversight board and endows that and does their own thing.
And finally, the last thing is I think it would be terrible for freedom of expression globally if we started to have one set of merged standards that came together, that were all the same industry standard and that you couldn’t have nudity on one platform and no nudity on another platform, or something like that, if that’s what you decided. I think the differentiation is key to preserving freedom of expression. But I think that you’re exactly right. That’s one of the things that was specifically contemplated by Zuckerberg when he started to create these documents.
Right. I think he said, “Well, maybe someday, other companies will use our board.”
He literally made the documents so that you could control-F, replace all, Facebook for Twitter. They’re really meant to be usable breakouts for other companies.
I mostly agree with you that it would be bad to have one weird public, quasi-private entity controlling all speech in America. It just seems bad on its face. On the flip side, there’s just an enormous amount of instability in people’s expectations of what they can do on services. The First Amendment means that it’s a free-for-all, which is good. The government can’t make speech regulations.
But the idea that Twitter will take something down and Facebook won’t, and YouTube will demonetize a creator for doing pranks that are too dangerous, and TikTok — people accuse TikTok, literally the algorithm of TikTok, of being racist all the time. I hear from the audience all the time that “I don’t know what’s going to happen.
Where are the rules these platforms have to abide by from a baseline?” And I think that’s how you get to this very popular political posturing that we should just impose the First Amendment, and they’ve got to do whatever the First Amendment says, which is somewhat nonsensical.
It’s not somewhat. It is literally nonsense, gobbledygook. It doesn’t make sense.
But I’m very sympathetic to where that comes from, that you’re going to go seek out some other authority that has this spiritual place in American life, and then everyone has to just do that.
Okay. I understand the impulse, but I don’t know how to square it with, these are different companies with different roles, different algorithms. And yeah, if Twitter wants to be a little looser with its nudity standards than Facebook, that is actually a good thing for speech in America.
I understand the impulse. I’m going to be really mean about people for a second. I understand the impulse to stop, to turn off your brain and take the easiest possible solution that gives you absolutely meaningless results and won’t have any type of procedural fairness over time. Sure. That seems great. We did that with newsworthiness. Newsworthiness is a circularly defined concept that people rely on all the time and it actually means nothing. And it’s time we finally started talking about that. And just because the Supreme Court uses the language and circularly defines it still doesn’t mean that it’s not philosophically a problem to rely on it as a standard to police people’s speech.
I really do understand the notion that people want there to be some answer out there that is going to solve this problem, but here’s the thing. You were just talking about this in the US and the First Amendment. Facebook’s global user base is 7 percent in the US; mostly it’s everywhere else. This isn’t even other people’s standards. Facebook talks about itself as a community. It’s not a community. It is a couple billion communities all overlapping on top of each other, that have almost nothing that necessarily binds them together.
A community is defined by a group of individuals that have a shared sense of norms and values and responsibilities. And there’s no global community that can even agree on whether we should allow female breasts online, let alone whether, when a Mexican cartel carries out a beheading, to let someone put that on a platform, or whether it’s too violent or it’s gore, or whether we should do something in between.
This is one of the other things that I’m excited about in this decision, because I think it starts to go to a place that is so much more useful and rigorous than how we’ve been having this conversation for the last 10, 15 years. It is time to stop letting people make these, “Oh, he’s a public figure. Oh, he’s a political figure. Oh, he’s newsworthy,” types of arguments and stop there. It’s time to get to the next level and dig deeper and figure out what it is we value and mean by that. And to your point about all this stuff about TikTok being racist, Twitter making arbitrary decisions, all of this stuff: those have grounded, intuitive roots in procedural justice and the rule of law that we can start to tie some of these things back to. And if we could start to have procedures around some of this stuff, then once we apply the substantive rules, they won’t seem so arbitrary and capricious and these companies won’t seem so unaccountable.
What’s interesting about that is we keep coming back to the laws and courts, which are fundamentally governmental functions and powers. At least in this country, there’s just not a way to have a speech court like that. So these all have to be corporate powers and enforcement mechanisms.
At least here in America in 2021, I cannot see everyone coming together and agreeing that this corporation, this LLC, has the power over speech on one of the largest platforms, and that its decisions are going to carry the psychic weight of a Supreme Court decision.
Well, to your point, pretty much the US courts have passed the buck on this all the time. They don’t make decisions on the substantive nature of viewpoint discrimination. They just say, “There can or cannot be viewpoint discrimination, and this is how we’re going to determine this.” I think that what you’re going to see is that Facebook’s going to still get to substantively decide what its policies are, but they’re just going to have to be fair and consistent and proportionate in how they enforce them. And right now, that is the biggest hurdle, the thing that people who have worked around content moderation are the most upset about. It’s not that Trump comes up or comes down when he incites violence or lauds a dangerous org. It’s that there is a different rule for you and for me, and for Trump and for Alex Jones, and that we don’t know what any of those decisions, any of those rules, are.
People are constantly having unfair outcomes. I think it was an 80 percent error rate on content moderation decisions. I’m just like, “That’s nuts.” Can we just work on getting that lower for a while, never mind keeping Trump down? That’s a lot of people that are censored. At the end of the day, this is really about just establishing some procedures. The substantive decisions, you’re right, everyone’s going to fight about them. No one’s going to be happy about them. There’s lots of laws people don’t agree with now, too, but they feel protected by the fact that there’s transparency and accountability of how they’re enforced. Well, kind of, depending on plenty of other systemic things, but that’s kind of something that I think that we can get into once we have this baseline.
You mentioned the error rate of moderation decisions. Connect this spectrum for me. We’ve done a lot of coverage of individual moderators at Facebook and their working conditions and how they feel, and the fact that they have fundamentally bad jobs and often get PTSD afterwards.
How should a contractor working in a Facebook moderation shop feel about the Oversight Board? What is the relationship that they should have? And what is the relationship they have now?
Oh, I think they should be very excited about this. In fact, some of the most interesting calls and texts I got yesterday from people inside the company, inside Facebook, were from people, as I would call it, on the factory floor or in the policy shop, who were very happy because they had agitated for these types of changes for a long time and felt like this gave them the clout and authority that they needed to put forward this rigorous new agenda, and not have to keep trying to rework the same terrible, ad hoc rules.
And so I think that that’s going to filter down to content moderators and their job being easier. I will say that I think that it continues to be something that we need to talk about and the fact that we are outsourcing this labor in this way and that we’re using individuals to cleanse things. We still talk about it like there is this…you talk about the algorithm — either from data that people are generating from even being on the site and where their eyeballs are going, to people making the content moderation decisions — I think that probably the algorithm is less sophisticated than we think.
Inside of Facebook, one thing that we’ve heard a lot about is the content moderation shop is connected to their political operation shop; that the lobbyists of Facebook are the people who write the rules for speech on Facebook, and that is deeply problematic.
That’s gestured at in this opinion, but they’ve now kicked it back to that same shop, which has faced any number of controversies over the years. Is that something that Facebook needs to change to make all of this more credible?
I have heard that they’re starting to. I heard this through other people who have heard this, so this is getting deep into kind of hearsay and just kind of rumor mill inside Silicon Valley. But I think that they are definitely wanting to devolve product and policy more distinctly. I think that that’s been happening for a while, and I think the Oversight Board is a huge part of that.
But I think that also, they are trying to get away from the idea that Joel Kaplan is in charge of so much and doing so much and trying to put more onto Nick Clegg. I think that the New York Times article that came out today is kind of part of that. I think that there is a desire to kind of set Nick up more for being a policy head. I mean, maybe it’s too little, too late, but I have no idea.
So we’re looking at this whole sweep of the Oversight Board being created, this big decision being referred to it, it asserting itself in saying, “No, you actually have to make a policy,” kicking it back to Facebook. In six months, they’ll send it back to the Oversight Board. What should regular Facebook users be looking for next from this process?
One of the things that’s been lost in all of the nonsense around Trump, and honestly, I know that people are like, “The Oversight Board is a distraction.” I feel like Trump is a distraction. I mean, for always and for all time, he has been a distraction from so much, but also from [Indian Prime Minister Narendra] Modi and from [Brazilian President Jair] Bolsonaro and all of the other leaders that are still in power that are threatening, and still on Facebook. I think that this is just one moment. So I think the next thing that users can expect is that whatever happens coming out of this next six months is going to have a huge impact on other types of world leaders.
The other thing I was going to say is that one of the things that’s been lost in all of the emphasis on the Trump suspension is that in the last couple of weeks, Facebook actually had implemented something that they had promised to do eventually, but we never knew how soon or when, which was to start putting their decisions to keep up content that had been flagged by other users into the jurisdiction of the board on appeal. So this means that if I find something that you said offensive and I flag it to Facebook as being lauding dangerous organizations or inciting violence, and they say, “No, it’s fine. We decided to leave it up,” I can now appeal that decision.
I think that that’s a huge deal, because it puts the board into both the role of being watchers of the censors and now being the censors themselves, basically being like, “No, that speech is too harmful. It has to come down.” I think that that’s actually going to be a really weird thing for a lot of these people to do, because I think a lot of these people are used to being like, “No, that has to go back up for the sake of freedom of expression.” I think it’s going to be a lot harder to take down certain people’s speech when it’s harmful.
Yeah. I think you actually see that in the Trump decision where, as they keep referencing the minority, they keep saying, “The minority would have gone farther.” That balance to me seems incredibly fascinating.
Kate, I suspect we’re going to have you back on the show a lot as the Facebook Oversight Board continues its metamorphosis into something credible. Thank you so much for being on the show.
Thank you so much for having me. Well, it was really fun.
Unless you’re willing to shell out for the Apple Magic Keyboard, Logitech’s Folio Touch might be your best bet for turning your second-gen, 11-inch iPad Pro into a laptop. Right now, both Amazon and Best Buy are taking $30 off the well-regarded keyboard case, making it one of the best sales we’ve seen on the device in recent months. In addition to the dedicated trackpad and a fabric-like finish, Logitech’s offering includes backlit keys, iPadOS integration, and more protection than you’ll get from any of Apple’s proprietary offerings. It’s heavy, sure, but the added heft is a small price to pay when compared to the cost of a replacement screen.
With iPadOS integration, space for the Apple Pencil, and a dedicated trackpad, the Logitech Folio Touch represents a solid choice for second-gen iPad Pro users looking for an alternative to Apple’s pricey keyboard cases.
It’s hard to talk about platforming without mentioning the most iconic plumber in existence (and for good reason). Luckily, if you happened to miss Super Mario 3D World + Bowser’s Fury, Mario’s latest 3D romp for the Nintendo Switch, you can pick up a physical copy of the game for $10 off at Amazon or Walmart. Like the last time it went on sale, today’s deal will likely only be available for a limited time, meaning now is your chance to nab the enhanced Wii U port and its feline-focused expansion before the price jumps up again.
The latest Mario release for the Switch pairs a polished version of the Wii U classic Super Mario 3D World with a new expansion called Bowser’s Fury.
Sony’s next-gen 1000XM4 earbuds may be just around the corner, but if you’re looking to pick up a budget-friendly pair of fitness earbuds, Amazon, Best Buy, and B&H Photo are all still offering a sizable 51 percent discount on the Sony WF-SP800N, bringing Sony’s mid-tier wireless earbuds to $98. They don’t sound quite as detailed as the Sony 1000XM3 — nor did they make our list of the best wireless earbuds — but they pack formidable lows and proper sweat resistance, giving them a slight edge over the 1000XM3 at the gym.
If you’re wondering why every company under the sun has released new gaming laptops today, it’s because Intel has announced its newest flagship mobile processors. They’re the newest members of its 11th Gen “Tiger Lake H” series. Asus and Intel have announced the new Zephyrus M16, which will pair the chips with Nvidia’s GeForce RTX 3000 GPUs (up to a 3070).
What’s exciting about the M16 is that it has a QHD, 165Hz display with a 16:10 aspect ratio. 16:10 is highly unusual to see in gaming laptops; it’s more commonly found in business and productivity machines due to the extra vertical space it provides. Asus hopes the new look will help the Zephyrus line reach content creators and other customers seeking a device that can work as well as game.
“It takes gaming laptops to an audience that wouldn’t have gone to a gaming laptop,” says Sascha Krohn, Asus ROG’s director of PC and laptop technical marketing.
The slim-bezeled M16 has been roughly two years in the making. “It’s really tricky to do a laptop with super slim bezels, because you have to design the laptop around that screen,” Krohn said. The M16 has a 94 percent screen-to-body ratio, meaning it has smaller bezels in relation to its size than the Dell XPS 15 and almost any other consumer laptop on the market. The Razer Blade 15, for comparison, has just above an 80 percent screen-to-body ratio.
Asus’ G-series (including the renowned Zephyrus G14 and Zephyrus G15) will remain the more “mainstream” Zephyrus options going forward. The M16 is more expensive, and the Intel chip enables features that enthusiasts and content creators may value more, including Thunderbolt and Intel’s Quick Sync as well as the 16:10 display.
Intel worked closely with Asus to equip the M16 with a number of modern features, including Dolby Atmos audio with Intel’s Smart Sound Technology Driver and MS Hybrid Mode. Mainly, the company believes its CPUs will provide enough power to take advantage of the 165Hz QHD display, a feat that only really became possible this year.
“We’re really ensuring that we continue to deliver the gaming performance that we had in 10th-Gen, where we outgamed the competition earlier this year, and focused on making sure that your IPC gains and our single-threaded performance is at the level that we expect it,” says Kim Algstam, Intel’s interim GM of premium and gaming notebooks.
Algstam also claims the new Tiger Lake chips will be better at multithreaded workloads and will outpace the competition (read: AMD) on battery life, which is an important consideration for the M16’s target audience. “We’ve spent incredible time making sure that the performance tuning and battery life tuning is up to expectations,” Algstam says. “Customers want to do more than just game. They want to work, they want to do more personal tasks when they’re out and about, and that happens on battery.”
AMD has set a high bar in that regard. The Ryzen-powered Zephyrus G15 and Zephyrus G14 were two of the longest-lasting gaming laptops I’ve ever reviewed. Many comparable 10th Gen Intel systems have lasted significantly less time in our testing.
The elephant in the room is Alder Lake, Intel’s next generation of hybrid chips, which are slated for release in the second half of this year. The company called the new line “a significant breakthrough in x86 architecture” at a preview in January. Should enthusiasts wait for that? Algstam didn’t address Alder Lake directly but did give a clear verdict. “I would definitely not wait,” he says. “I would buy today.”
Asus has not yet announced pricing or a release date for the Zephyrus M16.
MSI has announced a number of gaming and creator laptops that include Intel’s brand-new Tiger Lake H processors. The models will be available for purchase on May 16th.
MSI is best known for its high-end gaming laptops, but the company has made a few attempts to diversify its portfolio over the past few years. The manufacturer made a play for deep-pocketed professionals with its Summit Series business line last year, and it also sells some lower-priced models tailored to content creators. The new Creator Z16 is its first attempt to enter the market of premium content-creation machines, targeting customers that MSI bluntly calls “MacBook Pro users.”
There are two Creator Z16 models, with the base model priced at $2,599. Both come with a 120Hz 16:10 touch display with QHD+ resolution, which MSI says will cover 100 percent of the DCI-P3 color gamut. The 16:10 aspect ratio may be a bonus for on-the-go designers and artists since it provides more vertical workspace than traditional 16:9 gaming laptops do. Inside, both models come with an Nvidia GeForce RTX 3060 and 32GB of RAM. You can then select 1TB or 2TB of storage, and either a Core i7-11800H or a Core i9-11900H.
Those on a tighter budget may prefer the Creator M16, which is a lighter-weight version of the Z16. This model also includes a QHD+ display, but its chips max out at a GeForce RTX 3050 Ti and a Core i7. Pricing on that one is still to be announced. The Creator 17, which includes a Mini LED display, has also been bumped up to the new chips (up to a Core i9 and a GeForce RTX 3080).
Aside from the specs, MSI emphasized that its build quality has improved. Representatives told me the Creator Z16 would display the company’s “best build quality ever.”
Alongside its creator models, MSI has specced up a number of its premium gaming rigs. The high-end GE76 and GE66 Raider now have 11th Gen Intel processors up to a Core i9 (paired with graphics up to an RTX 3080) and a 240Hz QHD screen option, as do the GS76 and GS66 Stealth. The GP76 and GP66 Leopard, as well as the GL76 and GL66 Pulse (which are sequels to the GL Leopard line), also have the new chips up to a Core i7.
Closer to the budget end of the market, MSI has released two new entry-level gaming lines, dubbed “Katana” and “Sword.” The company says they feature a brand-new design inspired by the work of Japanese illustrator Tsuyoshi Nagano. (Sword models are white and Katana models are black; the Sword can also currently only be configured with 8GB of RAM while all Katana models have 16GB.) Katana models start at $999, and Sword models start at $1,099.
Lenovo is banking hard on 16-inch QHD displays in the taller 16:10 aspect ratio with its new lineup of Legion 7i and 5i Pro gaming laptops, and I’m all for it. These laptops are a showcase for crisper, more spacious displays that have a fast 165Hz refresh rate and G-Sync support, as well as faster processors by way of Intel’s new 11th Gen H-series CPUs. They’re also among the first laptops announced to support Nvidia’s lower-end GeForce RTX 3050 and 3050 Ti graphics chips, in addition to more powerful GPU options.
The Legion 7i is the flagship and packs the most power, supporting up to a 165W total graphics power (TGP) variant of Nvidia’s RTX 3080 (16GB) with a boost clock of 1,710MHz. That’s more power-hungry than what we’ve seen in most gaming laptops, so it should, theoretically, allow for some fantastic gaming performance. It can be configured with Intel’s flagship Core i9-11980HK processor, too, one of the fastest laptop chips on the market. The Legion 7i comes with a 300W power adapter, though if you’re doing light tasks (and not gaming), it can also recharge via USB-C at 95W. Lenovo says this model will release in June 2021 and will start at $1,769.99.
Despite a few differences, many of the Legion 7i’s ports and specs trickle down to the lower-end Legion 5i models announced today, including its two Thunderbolt 4 ports, three full-size USB 3.2 ports, an Ethernet jack, and an HDMI 2.1 port for outputting 4K resolution at up to 120Hz on external displays that support it. They also host fast DDR4 RAM clocked at 3,200MHz and NVMe PCIe SSDs, though the maximum capacity varies depending on the model you’re buying.
If you don’t need quite as much power as the 7i offers, the Legion 5i Pro has a similarly fast, tall, pixel-dense 16-inch QHD screen with the same 16:10 aspect ratio. It tops out at the Core i7-11800H processor and Nvidia’s RTX 3070 GPU with a maximum TGP of 140W and a boost clock of 1,620MHz, which is still plenty fast. That combination of specs should be sufficient to play most games in QHD resolution at high graphical settings, quite possibly with some ray tracing effects switched on. The Legion 5i Pro will ship in June as well, costing $1,329.99 to start.
The Legion 5i lineup also includes 15-inch and 17-inch variants. The specs don’t spell out all that many differences compared to the 5i Pro, aside from the lack of its 16:10 aspect ratio display. You can still get fast QHD screens with these models, though, and you can configure them with Intel’s Core i7-11800H and the RTX 3070, or save money by knocking them down to the Core i7-11400H CPU and the RTX 3050. Both of these sizes will release in July, and Lenovo says they’ll start at $969.99.
Intel has just announced its new 11th Gen processors for more powerful laptops, and Dell is ready with refreshed versions of its XPS 15 and XPS 17 laptops that add the new chips, along with Nvidia’s latest RTX 30-series laptop GPUs.
The new models are virtually the same on the outside as the more substantial 2020 refresh, which saw the reintroduction of the largest 17-inch size and a redesign for the 15-inch model to better match Dell’s popular XPS 13 design.
But both laptops now offer improved specs, featuring Intel’s 11th Gen Tiger Lake H-series chips, bringing the company’s 10nm process to Dell’s more powerful laptops. Both the XPS 15 and XPS 17 can now be configured with the six-core Core i5-11400H or the eight-core Core i7-11800H and Core i9-11900H options. The XPS 17 also adds a Core i9-11980HK option, offering eight cores and a maximum 5.0GHz clock speed for what Dell says is the “most powerful XPS laptop ever.”
There are also new, more powerful GPU options. The XPS 15 can now be configured with either Nvidia’s RTX 3050 or RTX 3050 Ti (with 45W of power), while the XPS 17 offers a beefier 60W RTX 3050 or a 70W RTX 3060 GPU.
Both computers still can be configured with up to 64GB of RAM, with options for either 4K (3840 x 2400) or FHD (1920 x 1200) panels, although the XPS 15 also has a 3456 x 2160 OLED option. Ports have also been upgraded: the XPS 17 now has four Thunderbolt 4 ports, while the XPS 15 offers two Thunderbolt 4 ports and a regular USB 3.2 Gen 2 Type-C port.
The XPS 15 will start at $1,199.99, while the XPS 17 will start at $1,399.99. Dell has yet to announce when the new laptops will be available.
Nvidia’s RTX 30-series lineup of mobile graphics chips has two new members joining today: the GeForce RTX 3050 Ti and 3050. They sit beneath the GeForce RTX 3060 in terms of specs and performance, with less video memory (4GB) and fewer dedicated Tensor AI and RT cores available to perform ray tracing and handle AI-enhanced effects like DLSS.
Despite this, Nvidia says that the RTX 3050 Ti is capable of going beyond 60fps in games like Call of Duty: Warzone, Outriders, Control, Watch Dogs: Legion, and Minecraft — all with ray tracing settings on. That’s pretty good, considering it’ll show up in gaming laptops starting at $849. The RTX 3050 will appear in laptops starting at $799. We already know that Samsung’s new Galaxy Book Odyssey will feature these graphics chips, starting at $1,399.
There are caveats. To begin with, Nvidia’s benchmark measured this level of performance with graphics set to medium, with medium ray tracing settings enabled, and with DLSS on and set to quality mode. It’s entirely possible that many games set to high graphics settings (and minimal or no ray tracing) might also perform well with the RTX 3050 Ti, but this graphics chip seems best suited for people who don’t mind knocking down some quality settings to get smooth gameplay.
The RTX 3050 Ti serves as yet another flex of Nvidia’s DLSS feature that, with the help of its AI cores, is able to run games faster than the hardware normally could. It does this in supported games by turning down the resolution, then using a trained AI model to enhance the picture quality on the fly without a perceptible (in most cases) difference in how the game looks. It promises big gains in performance with little in the way of disadvantages, unless you’re really dissecting pixels.
Again, this is a great argument in favor of these two GPUs, but it only works if your games have been patched to support DLSS. Control, for example, supports DLSS, but its performance without the feature turned on takes almost a 50 percent hit, running at about 35 frames per second at medium settings, according to Nvidia’s testing. That’s playable, but not particularly fluid, and it may be indicative of the kind of experience you might have when playing graphically intensive games that don’t support DLSS.
The performance charts that Nvidia shared with us only showed data on the RTX 3050 Ti’s performance, not the RTX 3050’s. Given that the RTX 3050 is a notch below the RTX 3050 Ti in terms of specs, you can probably expect performance to reflect that. Still, it should deliver good performance in the laptops it’s built into, which are expected to start at $799.
It’s also important to remember that, like with all other RTX 30-series mobile graphics chips, OEMs are free to tweak the total graphics power (TGP) of each RTX 3050 or RTX 3050 Ti in terms of wattage and clock speed to align with their design goals. The TGP range for these chips can be anywhere between 35W and 80W.
After 17 years, billions of newly hatched cicada nymphs are burrowing up from their earthen lairs right now to party. They will molt, sing, and mate like a society emerging from a pandemic with a biological drive to preserve the species. It’s going to get messy and it’s going to get loud… unless you’re using Nvidia Broadcast, which is being updated to filter out the sound of cicadas.
Nvidia has been successfully filtering out unwanted background noise from microphones for more than a year now, with impressive results. The latest Nvidia Broadcast 1.2 update now adds profiles to better isolate the sound of cats, dogs, and insects. If thousands of cicadas start emerging in your backyard this week, Nvidia has a profile ready to filter out the lawnmower-like chorus and save your daily Zoom calls.
If you’re lucky enough to live outside the 15 states where cicadas are rising up, there are some other additions in Nvidia Broadcast 1.2 that will also help with background noise. If you’ve been stuck working from home in an echoey room for the past year, Nvidia says that the latest update will also improve the sound of your voice in rooms with poor acoustics.
Nvidia Broadcast isn’t just all about audio and cicadas, though. Nvidia is also tackling video noise from lower quality webcams that people have been dusting off during the pandemic. This static removal is a beta feature in this update, but it should help improve low-light video calls to generate a cleaner image.
Hopefully this latest update also irons out some bugs that we’ve experienced with Nvidia Broadcast. While the noise removal features work well, the app itself has a tendency to not always work correctly with webcams, crash, or need to be reinstalled.
Nvidia Broadcast’s voice filtering capabilities, known as RTX Voice, are available on any GeForce, Quadro, or Titan GPU. You can download the latest update from Nvidia’s site.
Intel has added five consumer processors and five commercial processors to its 11th Gen Core H-series generation (codenamed “Tiger Lake-H”). Both groups include three eight-core chips and two six-core chips. All of the parts are 35W, save the flagship Core i9-11980HK, which is a 65W part. You’ll see them in over 30 upcoming ultraportables (laptops 20mm or thinner) and over 80 workstations.
The company (unsurprisingly) says the new chips will provide significant performance improvements over their predecessors from the 10th Gen “Comet Lake” series. It claims they’ll provide a 19 percent “gen-on-gen multithreaded performance improvement.”
On the gaming front, Intel says the Core i9-11980HK will deliver significantly better frame rates than its Comet Lake predecessor on titles including Hitman 3, Far Cry New Dawn, and Tom Clancy’s Rainbow Six Siege. The company also took aim at its competitors. It claims the 11980HK also beats the rival AMD Ryzen 9 5900HX on these titles and that its Core i5-11400H (meant for thin and light laptops) will outperform the Ryzen 9 5900HS on some and come close to matching its performance on others.
Intel did not make battery life claims in its presentation. That’s a bit concerning because recent AMD-powered laptops have been excellent in that department for the past two years.
In terms of more nitty-gritty specs, the chips will support up to 44 platform PCIe lanes, Thunderbolt 4 with up to 40Gbps bandwidth, discrete Intel Killer Wi-Fi 6E (Gig+), Optane H20, overclocking with Intel’s Speed Optimizer (on some SKUs), 20 PCIe Gen 4 lanes with RST-bootable RAID0, and turbo boost up to 5.0GHz with Intel’s Turbo Boost Max Technology 3.0.
The commercial chips will support Intel’s vPro platform, which includes a number of business-specific security features and management tools, including Intel’s Hardware Shield (which includes a new threat-detection technology that Intel says is “the industry’s first and only silicon-enabled AI threat detection”), Total Memory Encryption, and Active Management Technology. Intel says its Core i9-11950H will be up to 29 percent faster than its predecessor in product development, 12 percent faster in financial services work, and 29 percent faster in media and entertainment.
Many eyes are on these new chips, as AMD’s Ryzen 5000 mobile series took the laptop market by storm when it was announced earlier this year. Its eight-core chips have shown significant performance gains over previous generations, particularly in multi-core workloads and efficiency. Meanwhile, Apple’s Arm-based M1 chip has put up startlingly good performance numbers while maintaining incredible battery life.
Intel is playing catch-up here, and the Tiger Lake-H chips we’ve gotten to try so far haven’t been astonishing. The lightweight Vaio Z, powered by the quad-core Core i7-11375H, yielded great results on single-core benchmarks but couldn’t hold a candle to Apple’s M1 MacBook Pro in multi-core tasks. On the gaming front, we’ve also tested MSI’s Stealth 15M and Acer’s Predator Triton 300 SE (both powered by the 11375H as well). The Stealth didn’t quite achieve the frame rates we’d expect from a laptop of its price (and couldn’t take full advantage of its QHD screen), and the Predator had disappointing battery life.
I’ll have more to say about these new CPUs when I’ve gotten to test them for myself — hopefully sooner rather than later.
HP’s ZBook workstations are designed primarily with creators and enterprise users in mind. Two of the three new ZBook G8 laptops announced today — the ZBook Fury G8 and the Power G8 — should serve those crowds nicely. But the 15.6-inch Studio G8 is the oddball of the group for a very obvious reason, and I love it so much. It’s a work laptop, yet it has an RGB-backlit keyboard.
It has this colorful keyboard for the reason you might expect: HP apparently hopes you might also want to do some gaming on it. The laptop can be configured with some seriously high-end components, like Intel’s newly announced 11th Gen H-series Core i7 and Core i9 processors, going up to a Core i9-11950H vPro (2.6GHz base clock, 5GHz boost clock) processor. Impressively, it can house all of that power in a chassis that weighs less than four pounds.
The Studio G8 should shine in the graphics department as well, because it’s configurable with the variant of the Nvidia RTX 3080 that can contain up to 16GB of video memory, currently the most powerful mobile GPU available. Though if you’re concerned more with creative workflows than gaming, opting for the Nvidia RTX A5000 GPU intended for professionals might be the smarter choice.
The ZBook Studio G8 ships with a 1080p IPS display by default, but it can be upgraded to a 4K IPS screen with a 120Hz refresh rate that has 100 percent coverage of the DCI-P3 color gamut; you can also opt for a 4K OLED touchscreen. One other unexpected gaming-focused feature this laptop has is an HDMI 2.1 port, which allows certain configurations of the Studio G8 to display 4K resolution at up to 120 frames per second on external monitors or TVs that allow it.
Would I recommend the ZBook Studio G8 over any newer gaming laptops? Likely not, but it’s tough to say, since we don’t know the price. HP says it plans to release this particular model in July and will share the price closer to that time. The company also says that certain configurations with consumer-grade RTX 30-series GPUs may launch in the second half of 2021. HP’s other ZBook G8 models, the ZBook Power G8 and ZBook Fury G8, will launch this summer.
Razer has just announced new versions of its Blade 15 workhorse gaming laptop, complete with some of the biggest changes to the lineup in some time.
Like many other laptops announced today, the new Blade 15 Advanced features Intel’s 11th Gen H-series processors and Nvidia’s RTX 30-series graphics chips, with up to a Core i9-11900H (2.5GHz base clock, 4.9GHz boost clock), an RTX 3080 GPU with 16GB of video memory (Razer declined to share the total graphics power ahead of publishing), and a 4K touchscreen.
The most welcome improvement might be the new fingerprint-resistant coating making its way to all of these new models. I can’t imagine that it’ll eliminate fingerprints altogether, but this should address one of the biggest annoyances with the prior models. The Windows Hello webcam is getting bumped up to 1080p resolution (from 720p), and Razer claims the trackpads have improved palm rejection.
For the new design, Razer managed to shave off a little more than a millimeter from the thickness of the Blade 15 Advanced, coming in at 15.8mm thick. Razer claims that it’s the smallest 15-inch gaming laptop with RTX graphics and is 17 percent smaller by dimensions compared to the MSI GS66 Stealth. This size reduction applies only to the starting model that has the RTX 3060, though. Thinner might sound more appealing, but it isn’t usually better for gaming performance. Nvidia allows OEMs like Razer to choose the wattage and clock speed of the GPU based on their laptop designs, and generally speaking, the thinner the laptop is, the worse it can be at running games compared to thicker laptops that typically allow for bigger cooling systems.
The higher-specced options are thicker than this 15.8mm model, but that’s roughly the same thickness as the previous generation. The width and depth of these machines debuting today are also unchanged from the previous gen at 355 and 235mm (13.98 and 9.25 inches), respectively.
The latest (and thinnest) Blade 15 Advanced starts at $2,299, and this model has a 240Hz QHD IPS panel with 2.5ms response time and 100 percent coverage of the DCI-P3 gamut. It has an octa-core Intel Core i7-11800H processor, the RTX 3060 GPU with 8GB of video memory, and 16GB of DDR4 RAM clocked at 3,200MHz. A 1TB NVMe SSD that supports PCIe 4.0 for faster read / write and transfer speeds and an 80Wh battery come standard across all Advanced models.
The selection of ports across the Advanced lineup is similar but not identical to the models released earlier in 2021; the most notable additions are the two new Thunderbolt 4 ports. You’ll also find a UHS-III SD card reader, two USB-A 3.2 Gen 2 ports, a headphone jack, and an HDMI 2.1 port. Beyond that, all new Blade 15 models support Wi-Fi 6E, Bluetooth 5.2, and 20V charging via USB-C.
All of the Advanced models also support upgradeable storage and RAM. The starting model has only one M.2 slot because of its thin design, but all other new models have an additional M.2 slot for a total of up to 4TB of storage supported.
Spending more will get you a better screen, processor, and GPU. Below you can see the specs of each option, as well as the most recent version of the prior Blade 15 Advanced.
Nothing, the tech startup from ex-OnePlus co-founder Carl Pei, will reveal its debut pair of true wireless earbuds this June. The company says the earbuds will be called Ear 1, but it’s staying tight-lipped about other details like their specs, price, and final design. However, an illustration released alongside today’s blog post shows what appears to be a silhouette of the earbuds with a rather lengthy stem.
A June product reveal means the company could just about make good on its January promise of releasing its debut product in the first half of the year. However, a release that same month isn’t guaranteed. The company only says that it will reveal the product and announce details on how to order them, but it wouldn’t officially confirm a shipment date.
Beyond headphones, Nothing has said it eventually plans to build up an ecosystem of interconnected devices.
Peak Design, the bag and accessory maker that created one of our favorite backpacks, is launching a new online exchange in the US for people to buy and sell used Peak Design products. The Peak Design Marketplace opened in beta form in March, but today the used gear storefront opens for anyone looking to buy and sell gear…provided it was made by Peak Design.
To sell a product on Peak Design’s marketplace you have to register it on the company’s site (it offers instructions in video form) and provide pictures and details of the current condition of what you’re unloading. Peak Design says it reviews every listing and has the right to approve or deny anything before it shows up for sale. The company also provides a recommended sale price, but you can set it to any amount you choose.
Buyers will be responsible for covering the shipping (Peak Design tacks it on to whatever price is set) and sellers are responsible for shipping directly to the buyer. Theoretically, that direct shipping could also save on the additional cost and environmental impact that comes with shipping to a third location first, which can be required by other secondhand marketplaces and storefronts such as ThredUp and Patagonia’s Worn Wear.
Peak Design is also guaranteeing some basic benefits to used gear — like customer service and a lifetime guarantee — no matter how many times the gear has changed hands. Peak Design’s lifetime guarantee covers manufacturing defects and “Failures or breakages that render part or all of your product to become non-functional,” but not misuse, neglect, or cosmetic blemishes.
The disadvantage of Peak Design’s “Craigslist for camera bags” (besides being limited to one brand of product) is how sellers get paid out. Once a buyer receives the product you sold, they have to confirm that what they received is in the same condition as promised. Once everything’s confirmed, the seller gets paid. Peak Design will let you keep 100 percent of your profits for in-store credit or 75 percent if you want to be paid out in cash. The company says it doesn’t pocket that missing 25 percent, and instead uses it to pay Recurate, the company that helps manage the marketplace and sends out prepaid shipping labels for sellers.
Setting up a marketplace for used gear is a clever idea: it seems like a good-natured ad for the durability of Peak Design’s products, and it adds a way to try them out for a cheaper price than what they would cost new. Peak Design frames it as environmentally motivated as well — fewer vehicles burning fuel transporting products and less unnecessary packaging. The company also claims Marketplace sales are “100 percent carbon neutral,” though not all carbon offsets are created equal (or even used).
Losing 25 percent of what you could earn from a sale is not insignificant and will likely push some people into taking the store credit or selling elsewhere. But more than that, because of the restrictions, the Peak Design Marketplace is sort of an outsourced version of a traditional trade-in program. You have to do the extra bit of work of actually shipping your things, but you could earn more than the flat fee Peak Design might offer if it were running a trade-in program itself.
You might know of Oppo as a company opaquely related to OnePlus, but it recently became the leading smartphone maker in China — the beneficiary of Huawei’s sanctions-induced slide in sales even in its home market. As such, the new Find X3 Pro flagship is an important handset for the company, presenting an opportunity to solidify its place as a major player in premium smartphones.
And this time around, Oppo isn’t keeping the Find X3 Pro in China. The company still doesn’t sell phones in the US, but there are international models with Google services available in countries like the UK, where it starts at £1,099 (about $1,500). If you have the option and you’re shopping for a high-end phone, it’s worth checking out, because this is one of the sleekest Android devices you’ll see this year.
From the front, the Find X3 Pro looks almost identical to the OnePlus 9 Pro. Both phones have a 6.7-inch 1440p 120Hz curved OLED screen with a hole-punch selfie camera in the top left; since Oppo and OnePlus share a supply chain, it’s almost certainly the same Samsung panel.
Turn the Find X3 Pro around, though, and it looks nothing like the OnePlus — or any other phone, really. The camera bump is somewhat reminiscent of the iPhone 12 Pro in its arrangement of three lenses within a rounded square, but the bump is part of a single piece of glass that smoothly rises up to accommodate the cameras. Coupled with the mirrored finish, it makes the phone look like something out of Terminator 2 — at least until you get your fingerprints all over it.
The Find X3 Pro feels relatively light and thin for a 2021 flagship phone, at 193g and 8.3mm thick, and the total lack of sharp edges anywhere on the device makes it very comfortable to hold. I’ve also been using it with an included Kevlar-style case that preserves almost all of the device’s thinness, which is a relief because this is one phone I would not want to risk dropping.
Like the OnePlus 9 Pro, the Find X3 Pro has a very good screen. However, Oppo is aiming to differentiate it with what it describes as the first full-path 10-bit color management system for Android, allowing you to capture and view more than a billion colors as opposed to the 16.7 million on other devices. The Find X3 Pro software even includes eye tests to help the display compensate for various forms of color blindness.
I haven’t been able to see a huge difference between this and other displays in general use, but we don’t yet live in a 10-bit world. In all likelihood, you’ll only ever make use of this capability by creating your own content with the Find X3 Pro’s cameras, and even then the advantage is going to seem niche.
As for the cameras themselves, the Find X3 Pro has a respectable array of hardware. The heart of the system is two identical 50-megapixel Sony IMX 766 sensors for the main camera and the ultrawide; it’s not the biggest sensor out there, but it more than holds its own against flagship competitors. Oppo’s color tuning and HDR grading are relatively restrained, and it’s both unusual and refreshing to have an ultrawide that performs just as well in terms of resolution and dynamic range as the primary camera.
There’s also a 13-megapixel 2x telephoto camera, which doesn’t match the other cameras’ performance. This is a little disappointing from Oppo, which did more than any other company to popularize periscope telephoto cameras. Granted, zoom lenses are never the best-quality optics on a smartphone, but this feels like a step back from the 5x unit on last year’s Find X2 Pro. You could make the case that a 2x zoom is more versatile because it improves the quality of shots between 2x and 4.9x, but why not include both?
I can ask the “why not both” question with a reasonable degree of fairness because Oppo chose to allocate a large section of the camera bump to a bizarre microscope tool. The three-megapixel sensor has a lens in front of it that Oppo says is capable of up to 60x magnification, and there’s even a ring light around the glass to illuminate subjects that would otherwise be obscured by the phone’s shadow.
Is this cool? Admittedly, yes. It’s quite difficult to get subjects in full focus because of the extremely shallow depth of field, but you can absolutely capture ethereal images unlike anything you’ve ever seen from a smartphone by holding the Find X3 Pro up to threads, food, or OLED screens. But is it useful? Perhaps this is a failure of imagination on my part, but I think I’d take the periscope zoom. If you can think of a ton of reasons you might want a microscope-class camera on a phone, by all means enjoy this one.
The Find X3 Pro’s overall performance is as good as you’d expect from any other Snapdragon 888-equipped flagship. Oppo’s ColorOS skin is far snappier than it used to be, to the point that OnePlus itself is using it for phones in China now. The 4,500mAh battery doesn’t quite make this a two-day phone, but I never had any problem getting through a single day of heavy use.
Battery life is helped by the fact that Oppo has finally put wireless charging on a flagship phone. This was by far my biggest complaint about its predecessor — it might not be a big deal for everyone, but if you’re used to wireless charging, the lack of it is absolutely a dealbreaker. Oppo’s 30W wireless system can supposedly charge the Find X3 Pro to 100 percent in 80 minutes, though I don’t have the necessary proprietary charger to test that. The wired charger, meanwhile, is 65W and gets you a 40-percent charge in ten minutes.
It sounds minor, but the addition of wireless charging is really all I needed to be happy with the Find X3 Pro as an everyday phone. I’ve been using it daily for about six weeks now, and I have very few complaints. I could’ve done without the microscope camera, sure, but Oppo is now as capable as any other Android manufacturer at turning in legitimately premium, performant phones.
Don’t think you’ll get it at the sort of brand discount that OnePlus made its name with, though. The Find X3 Pro’s starting price in the UK is just £50 less than Samsung’s Galaxy S21 Ultra. I think the Find X3 Pro is a reasonable competitor to that device, but I can’t see too many people picking it over the larger and more trusted brand.
Still, the Find X3 Pro is an excellent device on merit, and further cements Oppo as a company worth paying attention to. This shouldn’t be surprising, of course, but the best phone from one of China’s biggest players is one of the best phones you’ll find anywhere.
Now, independent filmmaker Ian Padgham has come up with another must-try idea: riding a lawnmower on a closed course, from the perspective of a self-flying drone.
Not that he’s actually playing a game in this awesome video, mind: it’s pretty clear that Padgham just set his Skydio drone to film a normal video of him riding around, and then he likely added a lot of CG in post.
But to quote The Six Million Dollar Man, we have the technology! Savvy programmers could totally pair a headset with a self-flying drone and make this game for real.
As I explained in our Skydio 2 review, the company’s technology has come to the point where I implicitly trust it to follow me without crashing. You don’t need to worry about controlling this kind of drone at all — you’d only need to worry about steering the lawnmower.
Mixed-reality headsets like the Microsoft HoloLens have repeatedly and convincingly overlaid CG on top of the real world in real time (though admittedly only across a narrow field of view). They could certainly turn reality into Mario Kart from a drone’s perspective.
Drones like the Skydio 2 have a surprising amount of processing power inside these days, too.
We just haven’t put them all together yet. Speaking of which, Skydio — when can we expect an FPV headset from you? It’s right up there with “build a smaller folding drone that easily fits in a messenger bag” when it comes to no-brainer moves.
As the trial for Epic v. Apple entered its second week, both parties took a break from antitrust law to argue over whether bananas should wear clothes in court.
The banana in question is Peely, a humanoid fruit avatar from Epic’s game Fortnite. Fortnite, as you may remember, is at the center of the huge lawsuit between Apple and Epic. The trial’s sixth day began with testimony from Matthew Weissinger, Epic’s VP of marketing. And Apple used its cross-examination to offer the court an exhaustive tutorial on Fortnite, beginning with its title screen and one of its skins. Hence the banana:
Apple attorney: We have in front of us a new set of images, and what is this screen showing?
Weissinger: This is your matchmaking lobby.
Attorney: And we have a large yellow banana here, don’t we? In a tuxedo?
Weissinger: Yes. That is Peely.
Attorney: And that’s Peely, did you say?
Attorney: And in fact, in the tuxedo, he’s known as Agent Peely, correct?
Weissinger: That’s correct.
Attorney: We thought it better to go with the suit than the naked banana, since we are in federal court this morning.
Peely’s nightmarish existence is barely related to Apple’s case. And the “naked banana” comment would probably have passed for a throwaway joke, but for one very important fact: Apple slammed Epic last week by claiming that it hosted porn.
On Friday, an Apple attorney went after indie storefront Itch.io, which Epic lets users install through the Epic Games Store. The attorney noted that Itch.io included “so-called adult games” whose descriptions were “not appropriate for us to speak in federal courts,” calling them “both offensive and sexualized.”
Epic Games Store manager Steven Allison defended Itch.io, but the exchange may have stung Epic. Or at least, that’s the best explanation I can imagine for what happened two hours later — when Epic’s attorney decided to revisit Peely during her own questioning of Weissinger:
Epic attorney: A little bit of a digression. We talked about Peely? Our banana? Remember that?
Weissinger: I do.
Attorney: And there might have been an implication that to show Peely without a suit would have been inappropriate. Do you recall that?
Attorney: Is there anything inappropriate about Peely without a suit?
Weissinger: No, there is not.
Attorney: If we could just put on the screen a picture of Peely — is there anything inappropriate about Peely without clothes?
Weissinger: It’s just a banana, ma’am.
This does, somewhat astonishingly, relate to the core issues in Epic v. Apple. Epic is suing to make Apple open up iOS to alternative app stores like the Epic Games Store. Apple claims this would expose users to malicious and low-quality apps. It used Itch.io to paint Epic as a sloppy guardian of its users’ safety, and Judge Yvonne Gonzalez Rogers seemed to take Apple’s concern at least somewhat seriously. It’s unclear whether Rogers actually thought there was a graphically naked banana-person in Fortnite, but Epic’s attorney clearly didn’t want to take that chance.
But the Peely exchange still epitomized just how rambling and off-topic some of today’s testimony felt. Apple’s tutorial was clearly aimed at showing that Fortnite is mostly a game and not an “experience” or “metaverse” — encouraging the judge to weigh the App Store’s game-related policies against similar rules on consoles, rather than scrutinizing the whole iOS ecosystem. Still, the result felt like a college freshman padding an English essay with a blow-by-blow plot summary — or in this case, a blow-by-blow description of how to complete a skydiving challenge.
And despite Apple and Epic’s often very funny debate over the definition of a game, the case will probably hinge on drier-sounding questions like those discussed by Epic’s first expert witness, the economist David Evans.
Evans argued that Apple is running an unfair single-brand monopoly: basically, it sells pricey devices that lock users into an ecosystem with no reasonable alternatives for getting certain apps, beyond tossing their phone or tablet and spending hundreds or thousands of dollars on a new one. Developers can offer cheaper in-app purchases on the web or a different platform, but Apple won’t let iOS apps direct users to these savings.
Judge Rogers asked some skeptical questions about Evans’ testimony, and Epic will almost certainly try to hammer his points harder. Hopefully, both parties will let Peely rest — but who knows, maybe the banana clothing dispute will resurface in the days to come.
Today I learned that Hans Zimmer, known for his brilliant scores for movies such as Inception, Pirates of the Caribbean, and The Lion King, has also composed driving sounds for BMW. And they’re actually pretty good!
You can hear one coming to the company’s M version of its electric BMW i4 cars in this video (and jump to 1:30 if you just want to hear the sound).
And you can hear the driving sound that Zimmer composed for the BMW Vision M NEXT concept car in 2019 in this video:
These are cool, but they got The Verge staff wondering — what other composers should create sounds for cars? So we compiled just a few of our favorite composers into one list, and some ideas about what their car noises might sound like:
John Williams (Star Wars, Indiana Jones, E.T., Jurassic Park): bombastic and inspirational
Danny Elfman (The Nightmare Before Christmas, Men in Black, The Simpsons theme song): sprightly vrooming, possibly with a horn section
Clint Mansell (The Fountain, Black Swan, Mass Effect 3): intense, overwhelming, and somehow brings me to tears
Nobuo Uematsu (known for his many compositions to the Final Fantasy series): gets me hyped to defeat some bad guys (after I am finished driving, of course)
Trent Reznor (of the band Nine Inch Nails, and composer for The Social Network, Watchmen, and Soul [alongside Atticus Ross]): ambient, creeping dread
Yoko Kanno (Cowboy Bebop, Vision of Escaflowne): perky, humorous, yet somehow as perfectly matched to the vehicle as a seatbelt
Please, car-makers: have more composers create drive sounds. I just gave you some excellent candidates you can consider.
In fact, tech industry in general, please hire them to make other sounds, too — you don’t always have to ask Zimmer to make things.
The NASA spacecraft that snatched a sample of rocks from the distant Bennu asteroid last year fired up a suite of thrusters on Monday and committed to its two-year journey back home. The maneuver kicks the minivan-sized spacecraft, dubbed Osiris-REx, onto a winding cosmic path around the Sun and toward Earth’s orbit. When it returns to Earth in 2023, it’ll toss a capsule packed with asteroid samples through the atmosphere somewhere over Utah.
The spacecraft’s Asteroid Departure Maneuver (ADM) was no sweat for the Osiris-REx team, but it marked a significant step towards the return of the first pristine cache of asteroid samples in NASA’s history. Spacecraft engineers inside a Lockheed Martin center in Littleton, Colorado confirmed the seven-minute thruster firing began at 4PM ET Monday and celebrated success shortly after.
“All stations, the ADM burn has completed. We had a nominal ADM burn, and we’re bringing our samples home!” declared Navigation Team Chief Pete Antreasian, prompting applause inside the control room.
Osiris-REx launched from Florida in 2016 to journey over 100 million miles to Bennu, an acorn-shaped asteroid named after a mythological Egyptian deity that symbolized the world’s creation. Scientists hope Bennu, an ancient remnant from the earliest days of the solar system, will hold clues to the origins of life on Earth.
Last year, Osiris-REx entered Bennu’s orbit, becoming the first US spacecraft to circle an asteroid. It gradually approached the space rock’s surface and extended an 11-foot robotic arm with a showerhead-shaped collection device on the end. In a dramatic event that lasted only a few seconds, the sampling head touched down on Bennu’s surface and emitted a blast of pressurized gas strong enough to kick up rocks and asteroid debris to catch them in the sampling head’s container. Bennu’s surface was surprisingly soft, and the touch-and-go maneuver splashed up more rocks than scientists expected. The asteroid scoop was so hearty — collecting about two ounces — that rocks jammed the sampling container door open.
But engineers managed to close up the rock suitcase and stow it safely inside the spacecraft’s capsule. Osiris-REx stayed in Bennu’s neighborhood for a few more months to bask in the cloud of asteroid dust it punched up, and to study the crater it left on the asteroid’s surface. Now, it’s finally on its way home.
“It’s a new chapter in the mission,” says Osiris-REx project scientist Jason Dworkin, who maintains the scientific integrity of the sampling mission, serving as the operational glue between the spacecraft engineering team and the teams of scientists eagerly awaiting the asteroid samples. “I’ve been waiting a long time to get this sample to the laboratory,” he tells The Verge. “I started in 2004. My daughter was in diapers, and now she’s graduating from high school.”
The first thruster burn on Monday was precisely timed to put Osiris-REx in Earth’s path two and a half years from now, a little over 6,000 miles from the surface. The spacecraft will orbit the Sun twice along the way, using its thrusters to intricately nudge itself closer and closer to Earth and tallying 1.4 billion miles total in its return expedition. “This is really the finality — we’re done at Bennu, we aren’t going back,” says Sandy Freund, Lockheed Martin’s Osiris-REx Mission Operations Program Manager. “There’s a little bit of sadness, in that we’ve gotten to know this asteroid, we’ve learned so much. But then there’s that excitement of what we’re going to learn when these samples are here on Earth.”
The spacecraft will eject its dishwasher-sized asteroid sample capsule and send it careening through Earth’s atmosphere for a landing at the Utah Test and Training Range on September 24th, 2023. Osiris-REx will stay in space. If it manages to save enough fuel during its years-long return from Bennu, NASA might assign it a new mission to another asteroid sometime in the future, the agency said in a blog post on Monday.
As soon as it touches down in Utah, NASA teams will carefully transport the capsule and its precious cargo to the Johnson Space Center in Houston, where the agency’s Moon rocks currently live.
Only 25 percent of the Bennu material will be used for immediate inspection by scientists around the world. The other 75 percent will be stored for future scientists, some of whom haven’t been born. The researchers hope later generations can explore the samples using technologies that haven’t yet been invented — an apt way to anticipate innovation and prolong the scientific value of rare cosmic rocks.
“That means every decision I make has the weight of history on it,” Dworkin says. “So I want to make sure that I arm all future scientists with the best tools I can, so they can use the samples as best as possible. That’s one of the things that a project scientist does — they help enable more science to be done than they can personally do.”
“I look forward to in 50 years, or longer, maybe your readers, or your readers’ children or grandchildren, may be inspired to ask new questions with new techniques on these old samples,” he says. “It would be thrilling.”
According to UploadVR, the new headset should have a total resolution of 4000 x 2080, giving each eye 2000 x 2040 pixels (we’re not clear if 2040 is a typo). The original PlayStation VR had a resolution of 960 x 1080 pixels per eye, for comparison, so this would be nearly double in each dimension. The Oculus Quest 2 has 1832 x 1920 pixels per eye, slightly less than Sony’s headset is rumored to have — in both cases, the high resolution can help to avoid the “screen-door effect” that can often keep VR headsets from providing a clear image.
It is worth noting, though, that unlike the Quest 2, Sony had previously announced the PlayStation headset will still be wired, using a single cable to connect to the PS5 system. It’ll be a USB-C cable, according to UploadVR, which shouldn’t come as a surprise given that Sony conspicuously placed one on the front of the PS5.
UploadVR also has interesting information about how the PlayStation VR successor will fill its additional pixels. Its sources say that the headset will track the users’ eyes so it can do foveated rendering, where the image will only be fully sharp where you’re looking, and be blurrier in your peripheral vision. This simulates how your eyes actually perceive the world and lets the computer (or in this case, console) work more efficiently by not having to fully render things at high resolution that you aren’t looking at anyhow. There are, of course, other neat things you can do with eye-tracking (including creating more lifelike player avatars), but it’s currently unclear what Sony plans to do along those lines.
UploadVR also claims that the next-gen headset will use inside-out tracking, which would certainly be an upgrade from the re-purposed PlayStation Move system of the original, which required a fixed camera that could only properly track your head and hands if you held a pair of glowing sticks in a fairly small area between you and the console. Inside-out tracking typically uses cameras mounted on the outside of the headset itself to figure out where you are inside a room.
If these rumors turn out to be true, it sounds like it could be a promising accessory for those lucky enough to get their hands on a PS5. While it’d be nice to see a cable-free Quest 2 competitor, it’s hard to blame Sony for focusing on something that will complement its console. Personally, I’m already sold, and am starting to set aside some pennies (okay, more realistically, twenties).
The cyberattack that forced the Colonial Pipeline offline is just one consequence of a failure to address existing weaknesses amid an escalating “ransomware pandemic,” experts tell The Verge. That leaves the nation’s energy infrastructure especially vulnerable, even though there are basic steps that could have been taken to prevent the crisis that’s unfolding now.
“Honestly, I think for anyone who’s been tracking ransomware closely, this really shouldn’t be a surprise,” says Philip Reiner, CEO of the nonprofit Institute for Security and Technology. “This is yet another example of what is really a ransomware pandemic that needs to be addressed at the highest level.”
An escalating threat from bad actors, like the criminal group DarkSide that’s believed to be behind the attack on Colonial Pipeline, is coinciding with more potential weak points in the energy sector’s cyber infrastructure. Reiner says ransomware poses growing risks to critical infrastructure beyond energy, including health care and financial systems. Hackers have targeted tech, too. A subcontractor for Apple was hit with a $50 million ransomware attack just last month. But the energy sector seems particularly vulnerable to all kinds of cyber threats.
“This is the kind of thing that keeps folks like us awake at night,” says Tucker Bailey, a partner and cybersecurity expert at consultancy McKinsey & Company. “We’ve known that the [vulnerabilities] have been there for a while.”
Almost half of all the East Coast’s fuel typically travels through the Colonial Pipeline, which has been shuttered since May 7th. The pipeline company’s IT system fell victim to ransomware, a type of cyber attack in which hackers demand payment to bring systems back online. DarkSide also stole data from the company and threatened to publish it online, Bloomberg reported.
The frequency and severity of attacks against utility systems are on the rise, according to the National Regulatory Research Institute. Fifty-six percent of utility professionals surveyed by Siemens in 2019 said they had experienced at least one attack over the previous year that led to an outage or a loss of private information. More than a third of the 796 “cyber incidents” reported to the Department of Homeland Security between 2013 and 2015 took place in the energy sector.
A collision of a couple key factors could drive those numbers up. First, there are more state actors, cybercriminals, and hacktivists targeting critical infrastructure, according to experts. Second, an increasingly digital power sector opens up more opportunities for hackers to attack.
“As everything is becoming more computerized, the controls for our critical infrastructure are also more computerized and steps need to be taken to ensure that they are protected from cyber attacks,” says Leslie Gordon, acting director for homeland security and justice at the watchdog Government Accountability Office (GAO). She says what happened to Colonial Pipeline is “an example of a failure to protect critical infrastructure.”
Companies are regularly failing to practice even basic security hygiene, which leaves critical infrastructure open to attack. Good security hygiene can include relatively simple things like requiring multi-factor authentication, having response plans ready, and keeping backup systems in place. With Colonial Pipeline, failing to keep its network segmented — so that bad actors can’t easily hop from one piece of the system to the next — was a big problem that shows a lack of cyber hygiene, according to Reiner. Colonial’s IT system was attacked, but because it was connected to the company’s operational systems, the company shut those down, too.
“One of the things we see here is another example of basic steps not being taken in order to secure your systems,” Reiner says. “Cyber hygiene, or the lack thereof, is really one of the greatest causes of cyber crime. It’s not so much that these guys are so good. It’s just people leave very basic things undone.”
President Joe Biden is expected to announce an executive order that could require contractors the federal government works with to take those kinds of safety measures, and last month, the administration launched a 100-day plan to tackle “increasing cyber threats” to the US electric system. It includes working with utilities to build up their capacity to stop, detect, and respond to attacks. The Department of Energy also launched new research programs in March to make the energy sector more resilient to hazards, both physical and cyber.
But a workforce shortage is another lingering problem for the energy sector that could jeopardize those plans. There’s an estimated shortage of 498,480 cybersecurity workers in the US, a 2019 report found. The Transportation Security Administration, which oversees pipeline security, is short on inspectors and lacks a strategic workforce development plan to help it “carry out its pipeline security responsibilities,” a 2018 report by the GAO found. Three years after the agency recommended that the TSA fill that gap, the GAO says that has yet to happen (although the TSA reports that it’s in the middle of completing a workforce plan).
Until these basic problems are solved, the threat of cyberattacks will loom large over the energy system and other critical infrastructure. And while the attacks are virtual, the consequences can be quickly felt on the ground. The longer the Colonial Pipeline stays out of commission, the bigger the risk of gas stations, jet fuel, and even home heating oil running dry. The pipeline company did not respond to The Verge’s request for comment by time of publication, but said in a statement that it’s bringing parts of its pipeline online in stages — with hopes that most operations will be restored by the end of the week.
Many Amazon listings for two major electronics sellers have mysteriously disappeared as of Monday afternoon, and it’s unclear exactly what might be going on.
Here are a few screenshots from Aukey’s Amazon page, taken shortly after 12PM ET on Monday. Aukey is a major seller of chargers, portable batteries, and more, but its Amazon page has a whole lot of blank product boxes and products listed as “currently unavailable.”
It’s not clear why Aukey’s and Mpow’s listings have disappeared. Their disappearance is particularly weird when I can see that listings from other big sellers in the accessories space, including Anker, Belkin, RAVPower, and Satechi, appear to be working as normal as of Monday evening.
Aukey and Mpow have not responded to requests for comment.
Amazon would not confirm or deny whether it had removed these items — but it did provide a statement in response to our question that at least suggests these companies may have had some funny business going on:
We work hard to build a great experience for our customers and sellers and take action to protect them from those that threaten their experience in our store. We have systems and processes to detect suspicious behavior and we have teams that investigate and take action quickly.
We have long-standing policies to protect the integrity of our store, including product authenticity, genuine reviews, and products meeting the expectations of our customers. We take swift action against those that violate them, including suspending or removing selling privileges. We take this responsibility seriously, monitor our decision accuracy and maintain a high bar. We have an appeals process where sellers can explain how they will prevent the violation from happening in the future or let us know if they believe they were compliant. Our teams are based in our Seattle headquarters and around the globe in order to provide sellers with 24/7 support via email, phone, and chat in more than 15 languages.
Amazon is rife with scams, and the company puts a lot of resources into fighting them — Amazon just today revealed that it blocked more than 10 billion suspected bad listings before they were ever published in 2020, for example. Aukey and Mpow aren’t exactly small third-party sellers, though, which makes their disappearing listings even more mysterious.
Last week, an unconfirmed report from antivirus review website SafetyDetectives apparently revealed how some Amazon vendors figured out a quiet way to solicit and pay for fake five-star reviews — something that’s obviously against Amazon’s rules. While it’s not clear if the missing listings are connected in any way to the report — which does not name vendors who may be participating in the practice — we found two accounts describing how Mpow used review-soliciting tactics similar to what was uncovered in SafetyDetectives’ article, and XDA Developers journalist Corbin Davenport said that an Aukey standing desk he’s reviewing included a message offering $100 in exchange for an “honest review.”
Walmart appears to be getting into streaming devices with the Onn Android TV UHD Streaming Device, according to a new listing on its website (spotted by 9to5Google). The company’s new streaming box (in this case, really more of a flattened cube) is currently listed for $29.88 but is also “out of stock” — which could be due to the fact it hasn’t been officially announced yet.
According to Walmart’s page, the Onn Android TV UHD Streaming Device can, unsurprisingly, stream in 4K and play “Dolby audio” — although there are no other specifics shared. The small device runs Android TV, connects to a TV over HDMI, and comes with what looks like a Google Assistant-enabled remote with dedicated buttons for YouTube, Netflix, Disney Plus, and HBO Max.
Design-wise, as 9to5Google notes, Walmart’s streaming device shares a lot of similarities with Google’s developer device, the ADT-3, albeit with Walmart’s electronics brand, “onn,” slapped on top. The remote that accompanies the Onn also seems to be using a new Google design that’s been rumored to come with a host of new Google TV products in 2021.
Walmart has been in a multiyear competition with Amazon over basically all forms of shopping online and off, so muscling into the streaming device market with what’s essentially a Fire Stick competitor shouldn’t be surprising. For Walmart, the streaming space has standardized and streamlined enough that it’s probably comparatively easy to come out with a cheap streaming cube, stock it in thousands of stores, and let customers do the rest.
It’s hard to not put Walmart’s device in the context of TiVo’s apparent exit from the streaming device space. Streaming sticks and boxes are increasingly becoming a game for big companies that can reach the scale and low costs that smaller companies can’t keep up with. They also tend to have a better handle on software — for Xperi, the TiVo brand’s current owner, Google’s integration of universal search aped the big feature that differentiated TiVo from the competition.
The TiVo Stream 4K launched for $70, only to later drop to $39 in what seems like a bid to compete with the likes of the $50 Chromecast with Google TV, the $40 Roku Streaming Stick Plus, and Amazon’s $50 Fire TV Stick 4K. Assuming $30 is the Onn Android TV UHD Streaming Device’s actual price and it’s actually worth using, Walmart may be poised to start yet another race to the bottom.
Pfizer and BioNTech asked the FDA to expand the authorization to include the new ages in early April, just after they released data showing that their vaccine was highly effective for that group. All of the 12- to 15-year-olds given the two-dose vaccine in a small study generated a strong antibody response with no serious side effects, and none developed COVID-19.
The United States will likely need to vaccinate kids and teenagers to reach herd immunity and end the pandemic — the expanded authorization is another step toward that goal. Although kids are less likely to get seriously sick from COVID-19 than adults, they can still catch the virus and suffer long-term symptoms. If they’re infected with the virus, they can also pass it along to others, so vaccinating kids is a way to stop them from getting family members or others sick.
Pfizer and BioNTech also have studies in progress testing their COVID-19 vaccine in younger age groups, starting with infants six months old. Moderna, which has a COVID-19 vaccine authorized for people 18 years and older, is testing its shot in children and adolescents aged six months through 17 years old. Johnson & Johnson has plans for similar studies, as well.
152,819,904 people in the US have received at least one shot of a COVID-19 vaccine.
It’s a little-known fact that you can sling a PS5 or PS4 to another room of your house, streaming your games to a Mac or Windows PC, iOS or Android device, or even an old PS4 with Sony’s PS Remote Play app and a good Wi-Fi network. But until recently, you could only remotely control your shiny new PlayStation 5 with the old DualShock 4 gamepad.
I just gave it a quick try myself with an iPhone 12 mini and a recent iPad Pro, and I have some thoughts!
The good news: if the only controller you own is a DualSense controller, it totally works — and your awesome adaptive triggers come along for the ride.
Here is some bad news:
There’s no haptic feedback. Absolutely zero. It feels extremely weird.
Each time you want to switch devices (to your iPhone or back to your PS5), you’ll have to manually pair your DualSense again. That was true of previous pads as well, but I was hoping they’d fix it.
The built-in microphone, headphone jack, and speaker don’t work. The speaker is a serious loss — games like Returnal use it in combination with haptic feedback to add some serious immersion.
The streaming quality, as always, will depend heavily on your home wireless network. Give it a try, though; it’s free!
You’ll have to decide whether these things defeat the purpose of pairing a DualSense with an Apple device. As far as I’m concerned, I’ll keep my DualSense hooked up to the PS5 where it can actually help me sense things, and use a DualShock 4 (or third-party pad) when I want to stream.
Up until a few weeks ago, Apple was selling the Siri Remote, one of the least popular (and most easily lost) remote controls ever made, with a slick glass and metal chassis that’s practically designed to slip behind a couch cushion.
Apple also recently released its AirTag trackers, which use a Bluetooth network of Apple devices and local ultra-wideband tracking to help locate missing items, whether they’re across town or a few feet away in the room with you.
The AirTag tracker has a small speaker, so you can find it buried in a couch or under a pile of throw pillows.
Apple did not include a UWB chip in the new Siri Remote, which would have let users with recent iPhones track down said remote if they manage to lose it.
Apple is a company that produces cases for almost all of its products (including the iPhone, iPad, and AirTag tracker) — but not either iteration of the Siri Remote.
Apple’s official support document, “If you lost the remote for your Apple TV,” advises users to either use the Apple TV remote function built into iOS devices or just buy a new one.
My Apple TV remote is somewhere in my living room, and no, I don’t have any idea where it is.
With all that said, it’s no surprise that enterprising creators are already making 3D-printed templates for a case for the original Siri Remote that allows you to slot in both the remote and an AirTag tracker so that you can actually find it the next time you lose it in a couch. There are already several available from Thingiverse, an Etsy store selling files, and even an enterprising eBay seller that’ll print and ship you one if you don’t have a 3D printer.
To be clear, this is the lamest workaround for the fact that Apple didn’t just put a UWB chip and a tiny speaker in its $60 remote. It can’t be a cost thing: AirTags have both, and they only cost $30. Roku has been putting tiny speakers in its remotes to make them easier to find for years. There was even a strange message in Siri that seemed to hint at the possibility of finding a lost Siri Remote using the virtual assistant — but Apple removed the message a few hours later.
As such, I cannot explain why Apple has refused to embark down this mind-bogglingly obvious path. But I am puzzled why Apple isn’t making a nicer version of this exact 3D printed concept (ideally out of nicer, more durable materials that actually match the rest of the hardware and would be more enjoyable to use on a daily basis).
For Apple, it’s a no-brainer to make a case (that adds functionality that Apple should have included right out of the box) that would almost certainly exceed the cost of the remote and require the additional purchase of a different Apple product.
And yet, for all my mockery here, I really can’t find my Apple TV’s remote.
There are several relevant disclosures in the NTSB report. The first is that security camera footage from the owner’s home captured the owner entering the driver’s side door while his companion got in on the passenger side. The car left the house and traveled 550 feet before departing the road on a curve, hitting a drainage culvert, a raised manhole, and a tree. The car then burst into flames, killing the two occupants.
The second relevant finding is that NTSB crash investigators tested whether Tesla’s advanced driver assist system Autopilot would work on the part of the road where the crash took place. There has been much speculation from Tesla’s many interested online observers as to whether Autopilot would even function on the road near the crash site.
Using Autopilot requires both Traffic-Aware Cruise Control (Tesla’s brand name for its adaptive cruise control function) and Autosteer (which keeps the car centered in its lane) to be available. According to the NTSB, Traffic-Aware Cruise Control could be engaged on that stretch of road, but Autosteer could not.
Tesla claims that its own data suggests local officials were mistaken when they reported that the car crashed without someone in the driver’s seat. The company’s executives have stated that the steering wheel was “deformed” and the seatbelts were buckled, leading them to conclude that someone was behind the wheel.
There was some limited data recovered from the crash. NTSB said the fire destroyed the onboard storage device located in the vehicle’s infotainment console. The restraint control module, which records data associated with vehicle speed, belt status, acceleration, and airbag deployment, was recovered but was also damaged by the fire.
The board likely will not issue its final report on the crash this year. By comparison, the NTSB’s investigation into a California man’s death while using Autopilot in his Tesla Model X took two years to complete.
The crash took place on Saturday, April 17th, in Spring, Texas. According to KHOU in Houston, investigators at the scene were “100 percent certain” that no one was in the driver’s seat at the time of the crash. Minutes before the crash, the wives of the men were said to have overheard them talking about the Autopilot feature of the vehicle, a 2019 Tesla Model S. The two victims were identified as Everette Talbot, 69, and William Varner, 59, a prominent local anesthesiologist.
Several Apple suppliers may have used forced labor in China, according to The Information. Working with two human rights groups, the publication identified seven companies that supplied products or services to Apple and supported forced labor programs, according to statements made by the Chinese government. The programs target the country’s Muslim minority population, particularly Uyghurs living in Xinjiang.
Six of the seven suppliers were said to participate in work programs operated by the Chinese government, The Information reports, which human rights groups describe as frequently offering cover for forced labor. Workers can be jailed for refusing to join the work programs, the report says, and those enrolled in the programs are often moved far from their homes.
One of the suppliers operated in Xinjiang, the region of China predominantly populated by Uyghurs and where the most egregious human rights violations have reportedly taken place.
The companies supplied Apple with antennas, cables, and coatings, among other products and services, according to The Information.
Apple “found no evidence of forced labor anywhere we operate,” a spokesperson told The Information. Apple said it looks for forced labor as part of “every assessment” it conducts. “We will continue doing all we can to protect workers and ensure they are treated with dignity and respect,” the spokesperson said.
The problem is not Apple’s alone. The tech industry at large relies on suppliers in China, and The Information reports that these companies have also worked with Microsoft, Amazon, Google, and Facebook, among others. (Amazon and Facebook told The Information they wouldn’t work with suppliers using forced labor; Google and Microsoft didn’t respond.)
China’s forced work programs have been getting more attention over the past year, with new reports speaking to the growing scope of China’s oppressive practices in Xinjiang. BuzzFeed News reported finding more than 100 detention facilities located beside factories. In January, the Trump administration said China had “committed genocide against the predominantly Muslim Uyghurs and other ethnic and religious minority groups in Xinjiang.”
Apple’s supply chain has previously been linked to forced labor in China. The Tech Transparency Project said a glass supplier was using forced labor in December; Apple said it had seen no evidence of forced labor. In March, Apple cut ties with another supplier over allegations it was connected to coercive government labor programs.
The assumption is that DarkSide is not nation-state affiliated, but like oh-so-many ransomware groups it uses tools like “GetUserDefaultLangID” to perform language checks. If the victim uses any of the languages below, DarkSide moves on. https://t.co/atMjKSPAJl pic.twitter.com/LNJ0CBDdBo
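The check the tweet describes is simple: ask Windows for the user’s default language ID and bail out if it matches an exclusion list. Below is a minimal Python sketch of that pattern, for illustration only — the actual Win32 API is `GetUserDefaultLangID`, the tweet’s real exclusion list is in its attached image, and the three LANGID values shown (Russian, Ukrainian, Belarusian) are commonly cited examples rather than DarkSide’s confirmed list.

```python
# Sketch of the language-gate pattern described in the tweet.
# On Windows, GetUserDefaultLangID() returns a 16-bit LANGID for the
# user's default language; malware reportedly exits if it matches a
# blocklist. The IDs below are illustrative examples, not the actual
# list from the tweet (which is in an attached image).

EXCLUDED_LANGIDS = {
    0x0419,  # Russian
    0x0422,  # Ukrainian
    0x0423,  # Belarusian
}

def should_skip_host(langid: int) -> bool:
    """Return True if the host's language ID is on the exclusion list."""
    return langid in EXCLUDED_LANGIDS

# A real Windows sample would obtain the ID via the Win32 API, e.g.:
#   import ctypes
#   langid = ctypes.windll.kernel32.GetUserDefaultLangID()
# Here we just demonstrate the comparison itself.
print(should_skip_host(0x0419))  # Russian locale -> True
print(should_skip_host(0x0409))  # en-US -> False
```

This is also why some analysts half-jokingly suggest installing a CIS-region keyboard layout as a crude “vaccine” against such families — though that is far from a reliable defense.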
According to The New York Times, the 5,500-mile-long Colonial Pipeline is responsible for carrying 45 percent of the fuel for the Eastern US, including jet fuel and gas. The company that runs the pipeline has put out a statement saying that it’s currently bringing parts of its system back online, after halting all operations due to the cyberattack. Colonial Pipeline says its goal is to restore service by the end of the week.
Facebook has announced on Twitter that it will start testing a pop-up that asks users if they’re sure they want to share an article that they haven’t opened. The pop-up will prompt users to read the article, but they can also choose to continue sharing it if they want. Facebook doesn’t say how wide the test will be.
The pop-up is similar to Twitter’s, which it started testing in June 2020. Twitter’s implementation of the feature can be annoying to users (myself included) who have read the article elsewhere and decide to share it when they see it pop up on their feed.
Facebook says the pop-up is meant to help people be more informed about the articles they share, likely as an attempt to combat the spread of misinformation the platform has struggled with in the past. As the message warns, not opening the article can lead to “missing key facts,” with headlines often not telling the whole story.
Starting today, we’re testing a way to promote more informed sharing of news articles. If you go to share a news article link you haven’t opened, we’ll show a prompt encouraging you to open it and read it, before sharing it with others. pic.twitter.com/brlMnlg6Qg
LiveWire, the first electric motorcycle from Harley-Davidson, will now become its own standalone brand. The Milwaukee-based company announced that it would be spinning out its electric motorcycle division as its own company with a distinct lineup and a tailored retail experience. Harley-Davidson plans to unveil the “first LiveWire branded motorcycle” on July 8th to coincide with the International Motorcycle Show.
It’s a similar move to how Harley-Davidson approached its new electric bicycle company, Serial 1. The idea is that LiveWire continues to benefit from its relationship with its parent company while also forging its own brand identity that is distinct from Harley-Davidson.
It’s a shift from how the company approached its current LiveWire model, in which the Harley-Davidson logo is front and center, while the LiveWire brand is practically nonexistent. And it’s a sign that going forward, Harley-Davidson is interested in letting its electrified models stand on their own terms.
“With the mission to be the most desirable electric motorcycle brand in the world, LiveWire will pioneer the future of motorcycling, for the pursuit of urban adventure and beyond,” Jochen Zeitz, chairman, president, and CEO of Harley-Davidson, said in a statement. “LiveWire also plans to innovate and develop technology that will be applicable to Harley-Davidson electric motorcycles in the future.”
There’s a new logo and a new “virtual” headquarters, with engineering teams stationed in Silicon Valley and Milwaukee. LiveWire will work with Harley-Davidson dealerships as an independent brand, with a blend of digital and physical retail formats.
The question is whether this branding strategy will lead to better sales for Harley-Davidson’s electric models. Last year, Reuters reported that the company’s plan to appeal to a younger generation of motorcyclists with the LiveWire was struggling, with most of the preorders coming from older or preexisting customers. Harley-Davidson’s overall sales have been stagnant lately after hitting a low point in 2018.
The problem could be that the price, which starts at $29,799, isn’t that much cheaper than a Tesla Model 3. Dealers told Reuters that many younger customers were turned off by the price tag.
Harley-Davidson is also facing some stiff competition from rivals like Zero Motorcycles, which recently teamed up with power sports manufacturer Polaris on a new lineup of electric-powered ATVs and snowmobiles. And as the broader world of transportation shifts to electric, the company will be under increasing pressure from its shareholders to prove its vision for the future can be profitable.
YouTube has run the numbers and discovered what greeting its vloggers most use to open videos: “Hey, guys.” Those findings may not be surprising if you’ve watched a lot of YouTube, but the report actually shows a lot more, including what the runners-up are, how greetings have changed in popularity over time, and how video genre affects the most common openings. Let’s just jump into it!
To get these findings, YouTube did quite a bit of data analysis, looking at the auto- and creator-generated captions from over a million videos. YouTube picked videos that had over 20,000 views from channels that had over 20,000 subscribers, so it is worth noting that the results are more reflective of how relatively popular videos open, rather than of every single video on the platform.
YouTube also breaks down how creators of different genres open their videos. For example, sports videos are way more likely to start with “What’s going on?” while travel videos start with a “Good morning” 9 percent of the time. “Hey, guys” comes in second for opening tech videos, with “Ladies and gentlemen” barely edging it out (though I do prefer Tom Scott’s “Ladies, gentlemen, and all in between”).
The article also takes a look at international greetings from Brazilian, French, German, and Mexican creators. Surprisingly, only one has a phrase in the top spot that translates to “Hey, guys,” making it seem like more of an English-speaking opening than a global one.
To play around with the interactive timeline of greetings for yourself (and get a look at some famous channel openings from the infamous “Hey VSauce, Michael here” to “What’s up, Greg?”), you can head over to YouTube’s article. Perhaps it’ll do a breakdown of how people end their videos next so I can get a good idea of how I should be ending these blog posts.
In an open letter today, the National Association of Attorneys General called on Facebook to abandon plans for an Instagram platform focused on children under the age of 13. The letter is signed by 44 different state-level attorneys general (including those of non-states like Guam, Puerto Rico, and the Northern Mariana Islands), representing a majority of US states and territories.
“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the letter reads. “The attorneys general urge Facebook to abandon its plans to launch this new platform.”
Plans for the child-focused Instagram were first reported by BuzzFeed News in March, and subsequently confirmed by the company. But while internal emails reported by BuzzFeed identified the project as a company priority, Facebook insisted at the time that there was no specific timeline for release.
While the letter has no formal legal power, it emphasizes the significant legal risk Facebook will face in undertaking the project. In the US, children under 13 are subject to enhanced legal protections under the Children’s Online Privacy Protection Act (or COPPA), which places particularly stringent rules against data collection.
Social networks have traditionally complied with the act by banning users under the age of 13, but it has not entirely protected them from regulatory action. Most recently, Google agreed to pay $170 million after a Federal Trade Commission investigation into the company’s data collection from YouTube videos featuring children’s content.
State attorneys general have been particularly active in the enforcement of COPPA protections, so the NAAG letter carries with it an implicit legal threat: if Facebook proceeds with its plans for an Instagram for kids, these same attorneys general will be watching very closely for COPPA violations and will be eager to file suit over any violations they find.
Reached for comment, Facebook said it would not sell ads on any Instagram app targeted at young children but did not back off on its interest in developing the app.
“We’ve just started exploring a version of Instagram for kids,” said Facebook policy representative Andy Stone. “We agree that any experience we develop must prioritize their safety and privacy, and we will consult with experts in child development, child safety and mental health, and privacy advocates to inform it. We also look forward to working with legislators and regulators, including the nation’s attorneys general. In addition, we commit today to not showing ads in any Instagram experience we develop for people under the age of 13.”
They say good editing goes unnoticed. Online, it goes viral. It’s clear in the best internet videos: Editing defines the aesthetic, humor, and power of online storytelling. None of the genre’s inherent absurdity would click into place without an editor’s eye for a perfectly devastating zoom, a video cut short a millisecond too early, or a freeze-frame right at the moment of climax, with text overlaid to really underline the point.
And yet, the internet video has long lacked definition as a discrete genre, with its own tropes, techniques, and history. Like any art form, this one has been shaped in part by the technology available at the time. In compiling this list of influential video edits, we began in the last days of YouTube’s monopoly, shortly before the birth of the now-deceased app Vine. The online video has, of course, existed for decades, but it was the smartphone — and the proliferation of apps to come out of it — that made editing more sophisticated and more accessible to creators than it had ever been. Suddenly, anybody could shoot and edit a video, building the vocabulary of what that could look like: transition videos, lip syncs, and green-screen-driven storytelling began to cohere as distinct subgenres. That’s only accelerated in the age of TikTok, an app that offers more and easier editing tools for users than any that came before it.
Online video is an inherently communal form; it’s defined by thousands of people iterating on the same idea. Every once in a while, though, there’s a leap forward. Every video on this list represents an evolution in the form or exemplifies a particularly influential editing style — whether the creator was one of the first to attempt it, or just pulled off a jaw-dropping editing feat all their own.
Lip-syncing is everywhere now, thanks to TikTok and its precursors Musical.ly and Dubsmash, which had special features to make creating a seamless lip-sync a hell of a lot easier. But this particular one, a shot-for-shot recreation of Beyoncé’s “Countdown” video, was made before all that. A masterpiece made by and starring then-16-year-old Ton Do-Nguyen, it combines his flawless lip-sync performance with key editing elements we still see over and over in modern viral content, achieved with a digital camera and the editing program Vegas. The bulk of the video is shot in landscape, but Do-Nguyen integrates vertical shots throughout the video — particularly innovative in a time when many hadn’t accepted that the typical way people hold their phones is the easiest way to film with one. There’s a shot panning across a half-dozen vertical frames of Do-Nguyen dancing that looks like it could have been made in 2021 (probably using Trio, the TikTok filter that gives you a cohort of backup dancers who are just duplicated versions of yourself). And then, of course, there’s the Snuggie Do-Nguyen wears throughout: One TikTok trend last year involved recreating album covers using household items. The “Countdown” Snuggie would have worked perfectly, nearly a decade later. —Madison Malone Kircher
Comedian Atsuko Okatsuka once summarized her Twitter video style as “Okay, here’s the weird part. Good-bye.” It’s a perfect description for the videos, now common across Twitter, TikTok, and Instagram, in which the clip ends right as the action hits. That feeling of What the hell did I just watch overlaps with Well, I simply have to watch that again — a winning combination for both creator and audience. “Back at it again at Krispy Kreme,” a micro-video from Vine, is the platonic ideal of this technique, which was popularized on the late app. In the clip, a guy holds up a Krispy Kreme hat to the camera; says, “Back at it again at Krispy Kreme”; and does a back handspring, knocking a sign off the wall. Except you don’t really see the sign fall off the wall. You see the handspring and the initial crash of body and neon and then black. That’s all you get. It’s impossible not to watch it again. —MMK
Sometimes you just need a little emphasis. The extreme zoom is one of the easiest and most effective editing tricks and a fixture across content platforms. It can be used to subvert expectations and emphasize a reaction. The camera moves in and establishes, or otherwise breaks, the fourth wall — similar to the cinematography of mockumentaries like The Office. This wordless 2015 Vine by the creator who now posts under @francesformayor was one of the first to become popular: Dancing to a-ha’s “Take on Me,” she whips her face around to reveal a mouth full of braces and an inscrutable smile. In 2016, Snapchat made the editing effect ubiquitous by adding a one-finger digital zoom. It follows your thumb as you record (instead of requiring a second hand to pinch the screen), allowing for spontaneity. Four years out from the death of Vine, TikTok also offers a one-handed option and even has a face-zoom effect that uses facial-recognition software to automatically home in, kicking off several viral trends — not to mention the career of TikToker Bella Poarch, who uses the feature to make expressive lip syncs. —ZH
A YouTuber was trying to make a web series when he accidentally made one of the greatest Vines of all time. It was 2015, and during a man-on-the-street segment, he walked up to a kid, stuck a microphone in his face, and asked, “Who’s the hottest Uber driver you’ve ever had?” The kid, mishearing “Uber driver,” responded, “I never went to Oovoo Javer.” What makes the “Oovoo Javer” video funny isn’t the mixup, but the way the editor freezes the frame, adds early-internet text on screen, zooms in, and sets it to a piece of plucky, upbeat stock music. The vibe is public access–style irony, vaporwave without trying too hard.
The freeze-and-zoom-in edit is an extension of a similarly beloved internet video edit: the cut to black at the height of the narrative that allows the viewer to simply imagine the rest of the video. Unlike the cut to black, the freeze-and-zoom-in edit lingers on the very best moment — which is essentially the basis of TikTok’s wildly popular “Oh no” trend, in which users edit videos of themselves about to get hurt and freeze before the viewer can see it. “Oovoo Javer” could be considered the original “Oh no” moment. —Rebecca Jennings
Gen Z may have retired the reaction GIF, but reaction videos are still a fixture of internet culture. A quick cut from one video to another — a juxtaposition easily achieved with most editing programs, including the ones built into TikTok — uses the same logic as images posted side-by-side on Tumblr or Twitter: Putting unrelated images next to each other can tell a story or land a joke. “Two shots of vodka” is the ultimate insert-your-reaction video. It takes a clip from Sandra Lee’s cooking show, Semi-Homemade, in which the host obviously pours more than the “two shots of vodka” the recipe calls for. In some versions, the suspense of watching the shots stream out of the bottle is emphasized: The editor might make the pour louder or loop the clip. But the original footage of Lee’s knockout serving alone is enough to indicate what’s going to happen next. The cut facilitates a thrilling millisecond of recognition before the reaction clip comes in and says it all — that was too much vodka. —ZH
A style of lip-syncing videos came out of the app Musical.ly in the mid-2010s. It involved hand choreography accompanying jerky camera movements that emphasized the beats of the song. And, crucially, the app made it possible to sing along to a song in slow motion, then automatically speed up the footage. It gave the whole thing an energetic feel and allowed users to create clean, smooth transitions. Ariel Martin, whose username is Baby Ariel, was an expert in the form. Known for her buoyant facial expressions and hand motions, she became one of the app’s first breakout stars by busting out Musical.lys daily. The app was eventually bought by TikTok’s parent company, ByteDance, and folded into TikTok, which still allows you to choose the speed of your sound while you film, allowing for precise choreography (even when that choreography was really just striking different poses). Some techniques that Baby Ariel helped popularize — shaking the camera, swinging it back and forth, and choreographing moves that match the lyrics — are prevalent in TikTok dances today. —ZH
Approximately 7,000 years ago, in 2015, Logan Paul hadn’t yet become the guy known for things like vlogging a dead body in Japan or platforming Alex Jones. Back then, he was just an up-and-coming Vine star building a name for himself with stunty gag videos and flexes directed at an audience of predominantly young female viewers. In one of those early outings, “Kitty Cat Car Jump,” Paul appears to dodge speeding cars on a freeway to rescue a kitten. Writer Caroline Moss, who profiled Paul that year, says it was created with a combination of freeway footage and a greenscreen — a proper cinematic action scene in six seconds. It’s a testament to the creativity of early Viners, who were able to do so much in so little time. Nowadays, TikTok makes it easy to do modest green-screen work with a built-in filter, but this Vine is no crude effort; it’s technical. Videos like “Kitty Cat Car Jump” made the later era of messy content, like Emma Chamberlain’s, that much more of a 180. —MMK
Around the same time wacky face filters were a de facto feature of every social media app, there was the “That’s my opinion!” Vine. The video, popularized by iconic Viner Quenlin Blackwell in 2015, is a six-second clip from the season nine reunion of The Real Housewives of Orange County in which Vicki Gunvalson defends her possibly cancer-faking boyfriend to her former best friend, Tamra Judge. “How do you know what’s good for me?” Gunvalson shouts. “That’s my opinion!” screams Judge. (Bonus: the stunned expressions of castmate Shannon Beador and host Andy Cohen.) This is all standard fare for a Bravo reunion, but the Vine-ified version adds filters that make it appear as though every person’s face is melting like a Dalí clock, with big bug eyes and stretched-out foreheads, their voices dropped to an uncannily deep octave.
The distortion filters used on the housewives — which appear to be the same ones that have come standard issue with Apple’s Photo Booth app since the mid-2000s — seem crude to our contemporary eyeballs. But the legacy of ironic, funhouse facial distortions is still all over the internet, from PewDiePie YouTube thumbnails to edits skewering Drag Race contestant fights. Automatic distortion and facial recognition have become far more sophisticated in the years since, so much so that beauty filters are influencing plastic-surgery trends. Making your face look weird (or gorgeous) has been an integral part of self-presentation online ever since Snapchat made filters mandatory for any camera app worth using. The more interesting use, though, is in the millions of videos where people put on filters in order to play multiple characters, allowing them to control a narrative while still leaning on the comedy of an exaggerated face. —RJ
The 2007 film Bee Movie’s runtime is certainly longer than five minutes and 29 seconds, but this YouTube edit of it gets you from start to finish in just that. The concept is simple: Every time somebody says “bee,” the clip speeds up by 15 percent. The word is used twice in the prologue narration alone, so the characters already sound like cartoon chipmunks by the time they start speaking. It’s a piece of surrealist art, using a film that’s already about the relationship between a talking bee and a human. (Call it “beestiality.”) This video didn’t invent the concept — an earlier version, which used just the trailer for Bee Movie, also went viral — but it helped establish the idea in the world of internet-video editing in perpetuity. You can now find sped-up versions of everything from Star Wars to Ariana Grande songs. In each, the speed editing becomes the joke, and there’s a satisfaction to the consistency of knowing exactly how the video will play out. It feels similar to a more recent video edit trend on TikTok called “Poland is everywhere,” which involves manipulating the colors on a tiny slice of any video to reveal the red and white of the Polish flag. Speed editing created an umbrella category for very literal editing techniques where a general rule is applied consistently to video content. —MMK
In 2017, the “Karma’s a bitch” challenge swept across the Chinese video platform Douyin (a TikTok predecessor that’s also owned by TikTok’s parent company and is currently only accessible to people in mainland China). Lip-syncing to audio from a Riverdale fan edit, of all things, participants start the video dressed in nothing special, faces unmade. They mouth, “Oh, well. Karma’s a bitch” — then, usually with the wave of a scarf or bathrobe in front of the camera, reappear looking hot, with a new outfit, perfect makeup, hair done, a filter to make their skin look extra smooth, and perhaps a slow-motion effect to heighten the drama. Its predecessor, Vine’s “Don’t Judge Challenge,” came a few years earlier and involved teens making themselves look intentionally bad before revealing their hotter alter egos. With “Karma’s a bitch,” the transitions become slightly cleaner, similar to the reveal videos we see on TikTok today. The devices change but the general concept remains the same: a seamless transformation from one look to another. The fun is watching on repeat trying to find a glitch in the matrix, a visible rip in the transition. The best edits render this task fruitless. —MMK
You ever scroll by a video that seems like it’s just going to be someone taking a video of themselves, but then it suddenly looks like they peeled off their own face or disappeared into a mirror, and you’re like, Wait, what just happened, and why can’t I stop watching it? You can thank Musical.ly for those. Pretty much everyone on that app tried hypnotic transitions — a surprisingly lo-fi method wherein you film for a few seconds, pause, then position yourself and your phone so that the transition looks cool and repeat as necessary. But one of its true masters was then-teenager Isaiah Howard, who was known for his impossibly intricate editing, and who first went super-viral on his 60-second video set to the song “Addicted to My Ex,” which took seven hours to film. Since then, the torch has been passed on to TikTokers, who have expanded the genre with a whole bevy of visual tricks (like this one, where the user takes off his own head and spins it in the air). Some are done with clever camerawork, like Howard’s, while others are edited using desktop tools like Premiere Pro. —RJ
“Shooting Stars” emerged during a transitory period. It was January 23, 2017 — less than a week after Vine had ceased operations and more than a year before TikTok would launch in the U.S. There was no default platform for super-shortform videos. And yet, life found a way. The first version of the meme, a video titled “Fat man does amazing dive – Shooting Stars,” was uploaded to YouTube by a user named All Ski Casino and repurposed a clip that showed … well, you can probably figure it out. The edit caught the attention of the r/videos subreddit, where it quickly spawned hundreds of imitators — including a version in which Nicki Minaj shoots off to Prague.
The structure is easy to grasp. Take a clip of someone falling or spinning or generally goofing it. Then, at the exact moment of maximum goofage, freeze the video, extract whoever is goofing, and show them floating through trippy visuals while blasting the Bag Raiders song “Shooting Stars.” In 2017, one would have needed basic knowledge of a program like Adobe After Effects to make these videos; now, the meme feels like the prototype for the TikTok filters that let you effortlessly stencil out a video’s subject and change their surroundings — such as Green Screen, which replaces the background. That’s the real legacy of “Shooting Stars.” —Brian Feldman
You’d be hard-pressed to scroll through Twitter or TikTok without eventually stumbling upon a fan edit, a montage put together from clips of a celebrity looking particularly attractive or talented. While fanmade videos have been around for years, the viral 2018 “Chaeyeon Tingz” by Twitter user @chaeyeonbot pushed the format in a shorter, snappier, and more shareable direction. The video pairs photos and videos of South Korean singer and actress Chaeyeon with the confident sound of Nicki Minaj’s single “Barbie Tingz.” Using rapid transitions, the 29-second clip packs in a visual résumé of Chaeyeon’s commercial success, a video-game-style fight sequence where she knocks out hate comments with the power of a pretty face, and a tongue-in-cheek slideshow that includes many clearly fake pictures of her with other celebrities (“Yup, him too, he would still wife me”).
The key here, as in most fan edits, is timing. Every movement — dancing, winking, a headline popping up on-screen — is meant to match the music, which gives the final product high replay value (the same reason that “Beyoncé always on beat” fan edits, which pair footage of Bey dancing with songs from different artists, are so satisfying). Circulating on Stan Twitter, “Chaeyeon Tingz” birthed a trend that lasted over a year as other K-pop fandoms applied the format to their faves. While the original video and most of its derivatives have been taken down for copyright infringement, it’s still fondly remembered as an icon among fan edits, which are now dedicated to everyone from late-night hosts to Hollywood stars. —Jennifer Zhan
Gag dubs were an early and prolific trend in YouTube comedy videos, not least of all because the technological barrier to entry was so low — anyone could mute a TV clip and dub their own audio on top. (Plus, YouTube generally couldn’t strip the audio from your post on copyright grounds if you redubbed it yourself.) The result was viral videos from creators like Jaboody Dubs and Bad Lip Reading, who applied comedic voiceover to footage from infomercials, sports broadcasts, and news. Vern Hass, known online as @vernonator6497, cites old Billy Mays gag dubs as an inspiration behind his YouTube favorite “Wendy Williams except there’s no talking,” which takes clips of The Wendy Williams Show and renders them creepy and unfamiliar by wiping the soundscape of background noise. Instead of music and cheers, claps, and shouts from the audience, we see Wendy smack her lips and hear the sound echo through the cavernous studio. We hear an audience member shift in their chair. We hear a cell phone go off. This video presaged a trend of creators on YouTube and Twitter taking silent edits a step further and dubbing over famous reality-TV fights entirely with whispers (ASMR, weaponized) and influenced how some people record their own, first-person content — as in the TikTok trend of applying Auto-Tune to your voice while recounting an embarrassing anecdote, adding an extra layer of warped hilarity. —Rebecca Alter
Emma Chamberlain is the influencer who made it cool to not appear perfect online. The vlogger — who got her start on YouTube in 2017, when she was 16 years old — helped popularize a self-referential video-editing style that seems effortless, like she really just sets the camera to record and gives her viewers whatever happens. This is, obviously, not true, but the finished product makes you feel like Chamberlain leaves nothing out; she goes out of her way to include flubs, grossness, and goofiness. (In this video, she explains her recent bout of diarrhea.) She’ll label these scenes in her videos “me editing,” a caption that signals to her viewers that this is the real her, the messy her behind the scenes who was in charge of editing all her own content (up until recently, when she hired an editor to help her out). In 2019, the New York Times described her editing style as “instinctual”: “zooming, adding text to the screen and pausing to point out the best parts.” It’s a tactic Chamberlain says she homed in on because it was what made her friends laugh. Editing her content in a way that shows “flaws” and paints a “relatable” portrait is no more or less calculated than the content produced by creators who go a more manicured route, but by choosing to use imperfection as her filter, she inspired a wave of copycats. (YouTube search “vlogging like Emma Chamberlain.”) Chamberlain’s impact is about more than being a person who doesn’t edit out her burps or FaceTune her zits, though. You don’t get Charli D’Amelio filming TikTok dances wearing sweats in a messy bedroom without Chamberlain laying the groundwork. —MMK
Professional dancer and original Viner Casey Frey is known for editing together narratives featuring himself playing numerous, equally ridiculous characters. In his earliest hit, 2016’s “bad boi’s,” he stars as the titular, thirst-trapping bad boy and the girl he’s flirting with. He eventually moved toward longer, more complex Instagram videos that allowed him to better merge his dance skills with his penchant for absurdism. His opus came in 2019 with the video “Get tf out of my way type way,” set to the track “GOMF” by DVBBS, in which Frey encounters a bully (played by Frey), from whom he is saved by a third character (played again by him) when he inspires him to dance. While his noodle-y dance moves are great, the genius lies in the editing — it creates a narrative climax in which the two Freys sync up their choreography, lending the whole thing an uncanny quality. Is it a metaphor for the battle between the superego and the ego or, as some viewers have theorized, a Marxist manifesto of the TikTok age? Who knows! Whatever it is, the ridiculousness transcends: “Get tf out of my way type way” has gone viral multiple times, inspiring its own TikTok challenge and launching thousands of memes. The conceit of one person playing multiple characters is one that apps like Vine and TikTok made easy — see also former Viner Jay Versace, another master of the form. Frey perfects it here with his quick shifts in perspective, timed to his palpitating chest. —Eduardo Carmelo Dañobeytia
There are about a zillion videos on the internet that use text on-screen as their general format, but none are as joyous or inventive as Donté Colley’s, which combine raw dance talent with mesmerizing animation. The Toronto-based dancer began uploading videos of himself to his Instagram in 2018 in which he dances to fun music, then overlays each move with emojis — sparkling hearts spill out of his head while he smashes a negative thought with a little cartoon hammer, a burst of confetti exploding across the screen. Each video has its own inspirational messages, like “You got this!” or “Keep going!” or sometimes “Get out cho feelings.” (In 2019, Ariana Grande invited Colley to be in the video for “Monopoly” so she could use his edit style.)
There’s essentially zero limit to what a text-on-screen video can look like, from recipe tutorials to TikTok challenges where a person points to the space next to them and then adds text that pops up on-screen, set to the beat of the music. Digital creators have been experimenting with it since the earliest days of internet virality (remember eBaum’s World?), with notable trailblazers like Bill Wurtz adding psychedelic graphics, text, and music to his frantic video essays. In the smartphone era, creating a text-on-screen video is as simple as Snapchatting a friend, and everything from font use to timing can change its entire meaning. These days, text on-screen is easy for even the Luddites among us, so doing it well is its own artistic feat — one where both text and visuals play off one another in a constant, winking feedback loop. —RJ
On its face, a POV, or point-of-view video, is a relatively standard format — consider any GoPro footage, handheld documentary, or, well, a very large segment of porn, all of which capture a scene from a certain person’s perspective. But on TikTok, the POV is collaborative, inventive, and weird as hell. No meme better exemplified the comedy of the form than Danielle Cohn dancing to Usher’s “I Don’t Mind.” Cohn, a teenaged Musical.ly-turned-TikTok star whose rise to fame has been marked by several controversies (usually about her age and what is or isn’t appropriate for it), uploaded the original in 2019. On its face, there’s nothing that special about the video; teenagers dance to songs in their bedroom all the time on TikTok. What made this particular video the genesis for such a creative explosion is the pair of strikingly aggressive hip thrusts she makes during the dance. Other TikTok users started “duetting” it — a feature that allows you to respond to a video by filming side-by-side — pretending to be thrown across the room by her hip motions, leaping onto a bed or against the floor in an adjacent frame, and creating the illusion that Danielle’s hip is literally knocking them out. The real boom came after the dance had become an enormous meme. People began to expand the joke, duetting Danielle as objects inside the room — “you’re watching her from inside the Forever 21 bag,” “you’re the lice in Dani’s hair,” “you’re her bones” (there are audible cracks). In doing so, they combined TikTok’s most important editing feature — the ability to remix, or “duet,” what’s already been done — with the platform’s signature surrealism. —RJ
Absurdism, pure and simple. “Lorde Getting Sick From Pickles” is a brilliant example of a category of video popular on TikTok and often shared on Stan Twitter. User @imcaucasianking used intentionally shitty editing techniques to stitch together a deranged little film where the pop singer Lorde is put in a comically mundane situation: She orders a cheeseburger at McDonald’s (set up with an exterior establishing shot as just “Donald,” with one arch) and ends up getting sent to the hospital because of a pickle-induced allergic reaction. The visuals do not cohere: A cutout video of Lorde talking is plopped onto a low-res stock photo of a McDonald’s. A clip of a YouTuber biting into a burger is used to represent Lorde eating. Her face is chroma-keyed green to indicate she’s getting sick. A reaction video of Britney Spears running away from the camera represents a worker fetching the “manager,” who is “played” by a popular reaction image character — an easy laugh for Stan Twitter regulars. There’s a whole world of these videos: YouTube user Dariannas Eggs is known for putting pop divas in fatal and embarrassing situations with rudimentary video-collage editing. A variation of the form is made by TikToker @kevinatwater, who inserts himself into his pop diva audio-visual collages. These videos are the closest that filmed media has come to replicating the pure, anarchic creativity of playing Spice Girls with Barbie dolls. They bring us back to that boundlessness. —RA
Parodying the heightened production beats of reality TV isn’t exactly new; shows like 30 Rock and Kroll Show have been doing it for years. But this video by TikToker Bomanizer Martinez-Reid is a classic in the realm of amateur creators. Here, Martinez-Reid and a friend act out a relatable Gen-Z situation — not liking the caption that a friend adds to an Instagram photo of you — and run it through the Bravo machine. The power of the edit comes in its use of stock sounds: Housewives music and sound effects that signify shade. Drawing on reality-TV clichés has become a superpopular TikTok trend. One audio clip — a dramatic sound drop from the series Bad Girls Club — has been used over 1.5 million times on the app (and was first used by Isaiah Washington), giving ironic heft to mundane “plot twists” and confrontations. It’s a style that both makes fun of how overproduced reality TV is and demonstrates how we’ve all become our own reality stars and producers — daily life, Kardashianified. —RA
Superhero edits on TikTok — where users give themselves abilities like flight or control over lightning or fire — can get their overachieving creators a lot of attention: You won’t find a premade filter on the app that can generate all these objects and effects for you. TikTok user @xxd222 (whose more than 890,000 TikTok followers are nothing compared to the 3.9 million she has on the app’s Chinese counterpart, Douyin) adds kung fu and superpowers to her cooking videos, as in this demonstration of how she makes the pastries called mooncakes. While most cooking videos aim to be simple enough that viewers can replicate the results, xxd222 uses magnets to pull the moon down to Earth and flatten her dough, summons a chicken to fly overhead and drop eggs into her hands, and spins herself above the bowl when it’s time to mix ingredients. Over-the-top sound effects and fake explosions help make the final shot (a conventional close-up of a mooncake being split in half) a hilariously mundane payoff. Another creator on TikTok, Julian Bass — the self-proclaimed “CEO of Edits” — uses VFX to transition between tricks where he turns his body semi-transparent or separates his head from his body. Last summer, he was signed by a talent agency after a TikTok in which he replicated the powers of characters like Ben 10 and Spider-Man caught the eye of a Marvel director. To their followers, creators like xxd222 and Bass are heroes in their own right. —JZ
Musicians on the internet have an ear for turning anything — a video, a Reddit thread — into a song. One of the earliest was Songify the News, a web series responsible for the viral “Bed Intruder Song.” More recently, TikToker Charles Cornell is known for his piano accompaniments to Cardi B’s viral rants. The musician Lubalin creates songs out of absurd conversations he finds posted online. Over on Instagram, DJ iMarkkeyz (along with iComplexity and Suede the Remix God) has made an art out of not only transforming memes into songs, but creating an accompanying video collage. iMarkkeyz got his start on Vine, when remixing videos and sounds was its own burgeoning genre among stans, comedians, and musicians. In his now-famous “Coronavirus,” everyone from Elmo to Childish Gambino moves in sync with his remix of Cardi B saying “Coronavirus! Shit is real.” This is editing to create a vibe, the way a DJ would at a club.
TikTok utilizes beat-sync tech, where a slideshow of photos and videos changes depending on the rhythm and frequency of a sound. It’s led to trends where you upload random videos and let TikTok create montages for you (a feature that’s commonly used for fan edits). But to make something like “Coronavirus,” where the music and the images are in perfect harmony, takes an editor’s attention to detail, aligning movement with sound so cohesively that it no longer feels like a compilation. It’s also a good example of how an edit can combine a multitude of cultural references and somehow make them all work. —JZ
“Can we stop dueting videos when we have absolutely nothing to add to them?” It was a reasonable plea, posted by user @johnson_fran in November 2020 — and other users responded by duetting her original, then duetting those duets, and so on. The “Stop dueting videos” Frankenstein chains were easily the most genius use of TikTok’s duet feature, precisely because they weaponized its very purpose. They’re just one example of the “meta edit,” wherein the video knowingly subverts the viewer’s expectations of what the editing might look like.
To a generation that grew up watching YouTube videos made by experimental amateurs, the meta edit reveals that not only are the characters in on the joke, the tech guy behind the scenes is, too. Consider this video, in which a girl films a gag using the typical shot-reverse-shot front-facing TikTok format, then cuts to a wide shot showing what it would look like if someone walked in while she was making it. It offers a look at how embarrassing it is to perform the physical act of making a video designed to go viral on social media. Another example begins with a mundane attempt at a trending challenge, then uses frantic sound effects and trippy, overlapping visuals to acknowledge the emptiness of catering to digital algorithms as if they were ancient sun gods. You can offer all kinds of new editing tools on your video app — but you can be sure that people will find a way to use them against you. —RJ
Did Tom Cruise really join TikTok to film himself discovering bubblegum at the center of a lollipop and tell a story about meeting Mikhail Gorbachev? No — those eerily accurate videos circulating on the app in February 2021 under the username @deeptomcruise were deepfakes created by Belgian visual effects specialist Chris Ume, and were the first (and so far only) TikTok deepfakes to penetrate mainstream discourse. Much like the actual Tom Cruise, they did end up freaking out a lot of people.
Deepfakes are visual or audio content that has been manipulated by artificial intelligence to look or sound like someone else. The term was coined by a Redditor known for posting AI-generated celebrity porn in 2017. Earlier that year, researchers at the University of Washington terrified the world when they released a realistic-looking deepfake of Barack Obama delivering a speech he never gave. Since then, apps like Reface and FakeApp have allowed anyone with a smartphone to, say, make Elon Musk sing the “Numa Numa” song or Joan Didion sing “What Is Love” (albeit in a rather unconvincing way). Deepfakes have, of course, been used for nefarious purposes, mostly as revenge porn. But Ume’s shows how they can also be a playful genre of internet art. To make his TikToks, Ume enlisted a Cruise impersonator and put in weeks of work using professional video-editing tools and the open-source algorithm DeepFaceLab. So many of the internet’s most internet-y videos have revolved around exposing how the editing sausage gets made. Deepfakes are the opposite: an attempt to trick the brain into seeing something and to delight in the trickery. —RJ
Can a video encapsulate the internet? First found on Twitter, and more recently on TikTok, the meme recap is essentially a music video of a series of memes — sped up, slowed down, rewound, and blended together inside the entire Adobe Creative Suite. It features some of the most intriguing editing moves on the internet in 2021. It’s hard to pick just one, but this video set to Charli XCX’s “Unlock It (Lock It)” — a 2017 song that has recently gone viral on TikTok — jam-packs an entire acid trip’s worth of memes into a couple minutes: the Tiny Twinz dancing over a video of Ella Emhoff’s runway walk, Normani doing her “WAP” choreo encircled by the K-pop group Loona, drag Velma. Trying to unpack each layer of reference could fill the Library of Congress. The video, made by @twerkuwu and titled “Stan Twitter Music Video 6,” is not unlike a fan edit, if the object of fixation were the internet itself. Memes are superimposed onto others with ghostlike opacities; a greenscreen in one meme simply means an opportunity for another overlaid on top. Through their dizzying rhythms, these videos approach the abstract and artistic. Watching them is like getting an IV hookup of pure internet chaos. As Morpheus said, “No one can be told what the Matrix is, you have to see it for yourself.” —E. Alex Jung
Bird, the electric scooter company that helped launch the global micromobility boom, is planning to go public via a reverse merger with a special purpose acquisition company, or SPAC, according to dot.LA. Bird is merging with Switchback II Corporation, a Dallas-based “blank check” company focused on companies reducing carbon emissions, according to documents reviewed by the website.
Bird is the latest transportation company, but one of only a few e-scooter companies, to go public. A record number of companies have gone public this past year by merging with SPAC shell corporations, which avoids the scrutiny of a traditional IPO.
Reports first surfaced last November of Bird’s SPAC ambitions, after Bloomberg reported that the Santa Monica-based company was working with Credit Suisse to find a potential partner. Spokespersons for Bird and Switchback II Corporation did not respond to a request for comment.
The deal will net Bird hundreds of millions of dollars in cash, which it can use to fund its operations as it continues to chase profitability. Scooter sharing is a cash-intensive business, with companies routinely spending more on each scooter than they take in as revenue. Very few companies operating scooter fleets have succeeded in turning a profit.
dot.LA, which has gotten a look at the pitch deck, has more details about the transaction:
The transaction values Bird at $2.3 billion, below the $2.85 billion valuation it reached in the beginning of 2020. But that was before the pandemic, which drove 2020 revenue down to $95 million, a 37 percent decline from 2019, according to a deck pitching the deal seen by dot.LA.
Bird first launched its scooter sharing service in Santa Monica in September 2017. Since then, it has grown to over 100 cities, facilitated over 10 million rides, and raised cash at an unprecedented pace. It has the distinction of being the fastest startup to achieve a $2 billion valuation.
But the pandemic has taken a serious toll on the company. Bird saw its ridership numbers plummet at the onset of the lockdown last spring. Last March, the company laid off over 400 employees in a now infamous Zoom call.
But even as lockdowns ease and customers return to scooter sharing, Bird’s woes continue. The company was snubbed by a number of major cities issuing permits to scooter operators, including Chicago, Paris, and San Francisco. Bird was recently selected to participate in New York City’s inaugural scooter program — a decision that may have helped buoy the company’s long-term financial prospects.
Bird has grown increasingly reliant on revenue from its franchising program, in which the company sells its older scooters to small operators and takes a cut of each ride. The program, which is called Bird Platform, has led some operators to fall into deep debt, OneZero reported last year. The company has since launched Bird Platform in countries like Switzerland and Estonia, cheering investors who hope it will lower Bird’s labor and capital expenses.
In January, The Information reported that Bird was nearing a deal to raise more than $100 million in convertible debt from some of its existing investors. The debt, which could eventually be converted into stock, would help Bird avoid selling shares at a lower price than in earlier fundraising rounds. But the company has yet to disclose whether that deal has gone through.
Luis Antonio has been thinking about time loops for a long time.
In 2006, when he was working as an artist at Rockstar Games, the studio was soliciting pitches from staff for a new game idea. “I thought, ‘Let’s do a time loop game,’” he explains. “I was thinking something Hitchcock, like The Birds.” The studio didn’t take much interest. “They didn’t even look at it.” The same thing happened a few years later when he was working at Ubisoft. But he kept thinking about it. He tried to get some friends to work on the concept with him, but nobody wanted to give up their free time. “I gave up on the idea,” he says.
Years later, when Antonio was working as an artist on The Witness, which was developed by a much smaller indie team, he noticed that everyone around him seemed to be working on a side project. They would squeeze in some time with their personal creation while on lunch or after work. So he decided to learn how to program and pick up the concept again. “If I want this idea to be explored to its full potential, it makes sense that I actually do it myself,” he says.
That side project has gone on to become a big production called 12 Minutes. Antonio is now working with a small team and has partnered with publisher Annapurna Interactive, with a voice cast featuring stars Daisy Ridley, Willem Dafoe, and James McAvoy. 12 Minutes is a tense thriller that has players reliving the same period of time over and over again — the titular 12 minutes — as they try to uncover a startling mystery. “It grew into a more refined and nuanced experience,” Antonio says of the long road from side project to full commercial release.
In 12 Minutes, you play as a husband, voiced by McAvoy, who comes home from a long day to what should be a romantic evening with his wife, played by Ridley. They live in a tiny apartment, and just as they’re about to sit down and enjoy dessert, a man claiming to be a cop (played by Dafoe) bursts through the door and accuses the woman of murder. When the husband intervenes, the supposed cop chokes him — but instead of dying, the husband goes back in time to the beginning of the evening.
At least, that’s how things went for me the first time I played. 12 Minutes is a game about experiencing the same period of time repeatedly, choosing different actions each time to hopefully learn new information. During my second loop, I knew Dafoe’s character would tie up my wrists, for instance, so before he arrived, I grabbed a knife to cut myself free. (I still ended up dying.) There are lots of little elements like this tucked away; Antonio describes 12 Minutes as a game about “accumulated knowledge.” The more you play, the more you understand.
Despite the star power behind the game, 12 Minutes still largely sticks to its indie roots. It’s a tight, compact experience. The apartment is small, as is the list of actions at your disposal at any given moment. You view the world from a top-down perspective, which was originally a practical choice — it made movement simpler for a first-time programmer — but ended up giving the game a distinct look. You never actually see the characters’ faces, so the game relies on animation and dialogue to convey meaning and emotion. It plays a bit like an old-school point-and-click adventure game mashed with a cinematic thriller — and that’s by design.
Antonio says he loves classic LucasArts adventure games but also finds the genre frustrating at times. “Point-and-click games have this ambiguity,” he says. “There’s a window, but you don’t know if you can open it or not. Suddenly, you can open it because you dragged this thing over. There’s this frustration that comes out of the way they were designed.”
That’s something he wanted to change with 12 Minutes. “How can I make a very tight vocabulary where, the moment you get into the apartment, there are no questions about what you can use and what you cannot use? If you have a glass of water and you have a sink, I don’t have to tell you what’s going to happen if you drag the glass to the sink. All of the elements you can use are very clear. After one loop, you know everything you have for the rest of the game.”
The small space of the apartment, and the relatively limited number of items in it, are designed to make the experience clearer and more intuitive. Designing 12 Minutes became a process of removing things — objects, interactions, etc. — in order to make everything easier to immediately understand. “The more open it is, the more frustrating it is,” says Antonio. “By removing possibilities, the experience becomes a lot more pleasant.”
One example is the time element. Despite how important it is, it’s not exactly front and center; you won’t see a timer counting down 12 minutes. But after a few loops, you can get a sense of how much time has passed. Maybe you’ll remember the sound of a car outside that drives by a few minutes in or notice as the sun starts to set. It’s subtle, but that wasn’t always the case. “Early on, there were clocks everywhere,” says Antonio. “You could look at a phone to see the time, there was a clock on the wall. But I realized that if you do four or five loops, you get a feel for when things will happen.” This also had the side benefit of further immersing players in the time loop, forcing them to pay closer attention to small details.
The same goes for the loop itself. In 12 Minutes, you die repeatedly, but because it happens so quickly, it’s not particularly frustrating. You have enough time to make some progress, but if you make a mistake, you don’t have to wait long to try again. “Imagine the loop is five hours, and by hour four and a half, you make a mistake and want to try everything again,” says Antonio. “Here, nothing is further away than a couple of minutes.”
And while one of the big selling points of the game is its star-studded cast, originally, voice acting wasn’t even part of the plan. It wasn’t until Antonio partnered with publisher Annapurna, which has plenty of connections on the film side, that it became a possibility. He was able to direct the actors remotely; McAvoy and Ridley were on a soundstage in London, while Dafoe zoomed in from Berlin. Often in games, voice actors record their lines independently, but that wouldn’t really work for 12 Minutes, where the interactions between characters are so vital.
“When an actor says a line, the way he says a line will decide how the other one replies,” Antonio explains. “They would bounce a lot off each other, and the whole conversation could have a completely different texture. After four or five sessions, they were comfortable with the material.” Plus, he adds, “Willem didn’t want to be in a room saying lines to a wall.”
12 Minutes is listed as “coming soon,” and it’ll be available on PC, Xbox One, and Xbox Series X when it does launch. For Antonio, it was a chance not just to create his own game, but also to merge two things he loves — adventure games and film — in an approachable way. It required patience, years of refinement, and lots of evenings and weekends spent teaching himself to code. “I didn’t know it would be this complicated,” says Antonio.
Ford announced the name of its upcoming electric pickup truck: the F-150 Lightning. The new electric vehicle is set to debut on May 19th at an event held at the automaker’s Dearborn, Michigan, headquarters. It will also be live-streamed.
The Ford F-150 has been the bestselling truck (and vehicle) in the US for more than 40 years, so its imminent electrification is a big deal. As such, Ford is treating the reveal as a major event, with dozens of ways to tune in across multiple platforms. The company is also hosting 18 in-person events around the country, including in Times Square and Las Vegas.
But while we’re set to get a closer look at the truck in the weeks to come, the F-150 Lightning won’t actually go on sale until 2022. Ford recently broke ground on a new $700 million manufacturing plant in Michigan where the F-150 Lightning’s production line will be built.
Ford has let a few details about the F-150 Lightning slip, including dual-motor configurations, mobile power generation, “hands-free” driver assist options, and over-the-air software updates. The 2021 F-150 PowerBoost is the company’s first hybrid version of the popular F series, with an EPA-estimated rating of 25 mpg on the 4×2 models.
Like all automakers, Ford is currently engaged in a costly project to boost its high-tech offerings, including EVs, partial and fully autonomous vehicles, connected-car services, and shared vehicles like electric scooters. The company has said it will spend $11.5 billion to produce over a dozen electrified models (including EVs and gas-electric hybrids) by 2022.
Sony is nearing the release of its next set of noise-canceling true wireless earbuds, according to a post at The Walkman Blog. There are images of the WF-1000XM4 from pretty much every angle, and they line up with an initial leak back in February.
The new design differs quite substantially from the aging 1000XM3s. Sony has seemingly downsized these earbuds quite a bit; they no longer have the flattened pill shape and are now more in line with competitors like the Galaxy Buds Pro, Sennheiser Momentum True Wireless 2, and other earbuds with a round outer design. Sony has moved its logo to the side, so the branding won’t be so obvious this time around.
The company is sticking with its signature black and copper / rose gold aesthetic. These earbuds are mostly black, but there are accents around the external mics used for noise cancellation. Sony has also revamped the charging case, which will apparently support wireless charging — something offered by many premium earbuds released after the 1000XM3s. The case might charge faster when plugged in as well since the charging output has been increased.
Based on Sony’s Federal Communications Commission confidentiality requests, The Walkman Blog suspects the WF-1000XM4 earbuds could be officially announced as soon as next month. Will they have water and sweat resistance this time? That was a significant omission on the previous model. What about LDAC support? Hopefully we’ll know all the details in just a few weeks.
For years, “infrastructure week” was a kind of running joke in Washington — a stand-in for all of the boring but popular rebuilding work that could happen if Donald Trump ever decided to get serious. Under Joe Biden, this idea has become something more tangible: the trillion-dollar American Jobs Plan, which lays out investments in everything from bridges to broadband.
But while Congress grinds away on the details, we still want to think bigger. The pandemic showed just how shaky America’s foundation really is, whether it’s having enough hospital beds to survive a pandemic or having a connection fast enough to support a Zoom call. Fixing our infrastructure means looking at all of that — even the stuff like power grids and credit systems that only get noticed when they break. It also means building out infrastructure for the next generation of technology, whether it’s satellite internet uplinks or a nationwide EV charging network. If we’re going to live in the future, we need to take a close look at the infrastructure we’ll need to get there.
So starting today, we’re doing just that. All week, we’ll be running down the hidden structures that make our world work — in interviews, on video, and even in a live event focusing on how we can build a better internet.
Welcome to Infrastructure Week. It’s been a long time coming.
Venom: Let There Be Carnage, the extremely on-the-nose sequel to 2018’s Venom, has gotten its first trailer ahead of its pandemic-delayed September 24th release date. It promises more of the peculiar buddy cop comedy and Tom Hardy’s over-the-top acting that made the original a surprisingly fun watch.
Picking up after the last movie, the trailer shows that Eddie Brock and the Venom symbiote have achieved a sort of domestic peace: a “no eating people” sign is tacked up on the wall of Brock’s apartment, and Venom helps Eddie make breakfast in what might be the worst Ratatouille impression of all time. He’s trying.
Things shift focus to Cletus Kasady, a serial killer (Woody Harrelson, doing his best Dark Knight Joker impression) who gets hold of his own symbiote, the titular Carnage, setting up the movie’s headline brawl. (Presumably, the film will have some sort of explanation for why on earth Kasady is being selected, of all people, to be the test subject for an incredibly powerful alien symbiote.)
Despite the almost-exclusive focus on Spider-Man villains in the trailer, there’s still no sign of the heroic web-slinger himself (whether played by Tom Holland, Andrew Garfield, Tobey Maguire, or any of the Spider-Verse crew). That’s presumably due to the intricate corporate negotiations around the character’s appearance in Disney’s Marvel Cinematic Universe movies. (Venom and its sequel are made by Sony, not Kevin Feige’s Marvel Studios, which produces the Holland Spider-Man films and almost every other major live-action Marvel film these days.)
Also of note: the trailer prominently focuses on the fact that Venom: Let There Be Carnage will debut exclusively in theaters. With vaccines rolling out and theaters starting to reopen, it seems that Sony is betting heavily on the fact that crowds will be ready to come out in person to see the movie on September 24th.
If broadband access was a problem before 2020, the pandemic turned it into a crisis. As everyday business moved online, city council meetings and court proceedings became near-inaccessible to anyone whose connection couldn’t support a Zoom call. Some school districts started providing Wi-Fi hotspots to students without a reliable home connection. In other districts, kids set up in McDonald’s parking lots just to get a reliable enough signal to do their homework. After years of slowly widening, the broadband gap became impossible to ignore.
So as we kick off our Infrastructure Week series, we wanted to show the scope of the problem ourselves. This map shows where the broadband problem is worst — the areas where the difficulty of reliably connecting to the internet has gotten bad enough to become a drag on everyday life. Specifically, the colored-in areas show US counties where less than 15 percent of households are using the internet at broadband speed, defined as 25Mbps download speed. (That’s already a pretty low threshold for calling something “high-speed internet,” but since it’s the Federal Communications Commission’s standard, we’ll stick with it.)
Maps like this are important because, for much of the past decade, the scale of the problem has been maddeningly difficult to pin down. Most large-scale assessments of American broadband access rely on FCC data, a notoriously inaccurate survey drawn from ISPs’ own descriptions of the areas they serve. Even as the commission tries to close the broadband gap, its maps have been misleading policymakers about how wide the gap really is.
Instead of the FCC’s data, we drew on an anonymized dataset collected by Microsoft through its cloud services network, published in increments by the company over the past 18 months. Where the FCC tracks the connections that providers say they’re offering, this dataset measures the speeds households are actually getting. You can roll over specific counties to see the exact percentage of households connected at broadband speed, and the data is publicly available on GitHub if you want to check our work or drill down further.
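The shading rule the map applies is simple enough to sketch in a few lines: keep any county whose measured usage falls below the 15 percent cutoff. This is an illustrative reconstruction, not the actual code behind the map — the column names and the sample rows below are assumptions (though the per-county figures echo ones cited here), and the real file on GitHub may be laid out differently.

```python
import csv
import io

# Hypothetical sample in the rough shape of Microsoft's public dataset.
# Column names are assumed; check the GitHub repo for the real schema.
SAMPLE = """county,state,broadband_usage
Lincoln County,WA,0.05
Apache County,AZ,0.05
San Juan County,NM,0.29
King County,WA,0.81
"""

# The map shades counties where under 15% of households connect at 25Mbps.
BROADBAND_THRESHOLD = 0.15

def underserved_counties(csv_text, threshold=BROADBAND_THRESHOLD):
    """Return (county, state, usage) tuples for counties below the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["county"], row["state"], float(row["broadband_usage"]))
        for row in reader
        if float(row["broadband_usage"]) < threshold
    ]

for county, state, usage in underserved_counties(SAMPLE):
    print(f"{county}, {state}: {usage:.0%}")
```

Run against the full dataset instead of the inline sample, the same filter reproduces the set of shaded counties described below.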
The disparity between FCC reports and the Microsoft data can be shocking. In Lincoln County, Washington, an area west of Spokane with a population just a hair over 10,000, the FCC lists 100 percent broadband availability. But according to Microsoft’s data, only 5 percent of households are actually connecting at broadband speeds.
Other areas stand out for the sheer scale of the problem. Nine counties in Nevada fall under the 10 percent threshold, covering more than 100,000 people and the bulk of the area of the state. Most of Alaska is a similar dead zone — understandably, given how rugged the state’s interior is — but similar gaps pop up in southwest New Mexico or central Texas.
Because it’s measuring usage, this data doesn’t distinguish between people who can’t buy a fast connection and people who simply can’t afford one. In some places, you can see the connectivity problem as one more consequence of accumulated neglect. In Arizona, Apache County stands out as a long thin stripe in the northeast corner of the state, showing just 5 percent broadband usage. More than 70,000 people live there, most of them members of the Navajo, Apache, or Zuni tribes. According to the census, more than 23,000 of them are living in poverty, by far the highest poverty rate in the state. Across the border, San Juan County, New Mexico, shows 29 percent broadband usage, so the problem isn’t that the county is too remote or that the terrain is too difficult to manage. Apache County is simply poor, and the slow progress of the broadband buildout seems like a promise it will stay that way.
With the right eyes, you can even see the broadband gap as a dividing line for the US at large. Counties on the wrong side of the line are poorer and more remote, losing population even as the country grows. This is why there’s no broadband, of course: from a business perspective, building out fiber in Apache County is a losing bet. But the lack of fiber also stifles economic activity and makes young people more likely to leave, creating a cycle of disinvestment and decay that has swallowed large portions of our country.
In theory, this is a problem the federal government is getting ready to fix. President Biden has proposed $100 billion in broadband funding as part of the American Jobs Plan, more than twice what the FCC estimated would be necessary to bring broadband to 98 percent of households. But it will be a long walk from appropriating that money to actually laying fiber in places like Apache County. That road starts with taking a long look at the shaded parts of this map and thinking about what it will take to truly get them online.
Best Buy has an attractive deal on Razer gaming gear, including headsets, keyboards, mice, and mouse pads. If you buy any two listed on the site, you’ll get 20 percent off the total at checkout. But if you get three, you’ll get 25 percent off the total. This deal seems especially great if you just got a gaming PC and are looking to get set up with all of the peripherals you need. For around $50, you can get the BlackShark V2 wired headset, the excellent Viper ambidextrous mouse, or the DeathAdder V2 wired gaming mouse. Near the $100 level, you’ll unlock deals on wireless gaming mice, keyboards, and more.
Digital storefront Eneba is selling copies of Resident Evil Village for PC (redeemable on Steam) for around $43 when you use the offer code REVILLAGE (per Slickdeals). This game just came out last week, and it regularly costs $60 for a new copy. Slickdeals’ page says this deal has expired, but the offer code still worked at the time of publishing.
Resident Evil Village is the latest installment in the horror gaming franchise. It’s the direct sequel to RE7, and you assume the role of Ethan Winters, a guy who just can’t catch a break.
I’m resurfacing a good deal from last week on the Apple Watch SE that’s still happening at B&H Photo. You can get the Nike edition of the 40mm Watch with GPS and LTE connectivity for $289. This configuration usually costs $329, so you’re getting a decent discount. Considering this is just $10 more than the GPS model, it’s a no-brainer of an upgrade.
Even as the Series 6 drops in price (we saw the all-red 40mm GPS model selling for $250 a few weeks ago), this is still a good deal because of the cellular connectivity.
Pete Buttigieg was not an obvious choice for secretary of transportation.
As mayor of South Bend, Indiana, he oversaw a public transportation system with an annual ridership of about 2.5 million. As a 2020 presidential candidate, he rose to fame as one of the youngest contenders and the first openly gay candidate to run for the highest office. But he wasn’t the first pick for many of the most transportation-minded voters.
After President Joe Biden’s victory, Buttigieg’s name was floated for a number of cabinet positions, including Veterans Affairs secretary, United Nations ambassador, and ambassador to China. But in the end, Biden picked him to run the Department of Transportation.
It would turn out to be a prescient choice. It sent the signal that Biden clearly wanted to leverage Buttigieg’s political celebrity to advocate for his $2 trillion plan to shore up the nation’s infrastructure and create millions of jobs. So far, Buttigieg has been an eager player, sitting for dozens of interviews, holding public events, beating the bully pulpit on the need for a massive overhaul of transportation infrastructure, and even participating in a few cringe-worthy attempts at going viral.
Sec. Buttigieg sat down with The Verge’s senior transportation reporter Andrew J. Hawkins to discuss the most important elements of the plan.
This interview has been lightly edited for clarity.
I’ve spoken to a lot of experts over the years about infrastructure, and they always tell me that the goal should be to “future-proof” infrastructure against the possibility of disruptive change. And so I just wanted to start by asking you: in what sense do you feel that the president’s jobs plan future-proofs our infrastructure? What elements do you think are sort of the most forward-thinking?
Yeah, that’s a big part of it. I mean, that’s one of the reasons you see $50 billion committed to the idea of resilience. When we fix things, let’s fix them right, not just redo the status quo. And let’s be ready for a future where the right answer is going to look a little different than it did in the past, especially in a changing climate.
Of course, future-proofing also means accounting for the fact that a lot of our means of getting around are going to evolve and change over time. We’re trying to focus on things like transit-oriented development, active transportation, different kinds of mobility — things that are going to make sense even as we have a shifting future and shifting patterns of life, as the pandemic showed us in an accelerated fashion. Then, of course, you have the investments in electric vehicles, preparing for the electric vehicle future by deploying a charging network of half a million chargers around the country, creating the kind of rebates or tax incentives that are going to be needed to make sure that electric cars are not a luxury item.
And really recognizing that has to go hand-in-hand with improvements to our energy grid and our energy generation in general, so that we can capture the benefit of that. And those are just some of the things that are in this. I would also point to the R&D dimension. It’s not getting as much attention. But the idea of creating real research assets above and beyond what we’ve had in the past on everything from stuff people could imagine off the top of their heads like, you know, new transportation technologies, to some really unsexy, incredibly important stuff like pavement, where you have a lot of promising forms of concrete that could be carbon negative, permeable pavements that help with stormwater issues. There’s so much that we’re just beginning to discover in terms of things as basic and unnoticed as the surfaces that we walk and drive on.
The top-line figure is $2 trillion. I’ve heard some folks on the Republican side say, “That’s too much. It’s going to riddle the country with debt.” But I’ve heard a lot more credible folks say, “It’s not enough.” I’m wondering if you think that this is actually a number that could go higher as this bill winds its way through Congress?
Let’s remember that this represents the largest investment in American jobs since World War II. And this is not a minor proposal. This is designed to sit on top of what already happened in terms of surface transportation reauthorization. Not all of America’s spending on infrastructure for the future is going to be federal spending, right? This is part of a bigger picture where we continue to see work happening at the local and state level. And to the extent that we can support the mobilization of private capital to where we know it’s not going to happen without good federal leadership. This is a major, major investment in setting America on the right path for the years ahead.
So the administration came out with a very ambitious goal about halving the amount of carbon emissions by the year 2030. Transportation is a huge driver of carbon emissions. You’ve spoken about electrification, but what are some of the other elements of the plan that would help get the carbon out of transportation? It seems like it’s just going to be an enormous challenge.
We talked about electrification, but a lot of it is also mode shifting the way people get around. If you’re going to be in a vehicle, we want that vehicle to be low and zero emissions. But we also need to create some alternatives so that you don’t have to drag two tons of metal with you everywhere you go. That’s why we’re making sure that we improve our support for transit. This plan doubles funding for transit at a federal level. It’s why things like that matter — things like rail, rail for passengers obviously, a great alternative, especially on short and medium routes, to more carbon-intensive ways of getting around.
But [it’s] also making sure we support the movement of freight on waterways and rails where that’s the most carbon-efficient solution for cargo. All of these things have to fit together. It’s an incredibly networked and layered set of solutions. Because the reality is we can’t just rely on a paradigm from 100 years ago about how we move around. And then you look further into the future, ways to decarbonize the maritime and aviation sectors, including sustainable aviation fuels. You know, there’s a lot of good technology out now that exists, but they’re nowhere near the scale that’s going to make it possible to drive the cost down and to get the most benefit.
Climate change is such an existential threat to our way of life. It’s hard to wrap our heads around the idea that, in the future, [we] won’t be driving or taking planes as much. Is that something you feel needs a psychological shift in the American public? How’s that going to look do you think?
I think it’s about balance. It’s about making sure that people can get to where they need to be, but maybe in different ways. I mean, even over time, thinking about commuting distances when we design cities and design housing in the first place. So yeah, it’s not just taking the manner and length of trips that we have today, assuming it’ll be the same way forever, and trying to make it more environmentally friendly. It’s also about imagining the kind of trips that we have to take now and making them more manageable, shorter, or sometimes making them obsolete. But look, people and goods will always need to move around the communities, the country, and the world. So we have a responsibility to make sure that every mode of getting around is cleaner than it used to be.
You mentioned transit. Transit was facing a lot of really stark challenges, even before the pandemic happened. I’m wondering what you think transit needs in order to not only expand and become as convenient and reliable as we’d like it to be, but also as safe in order to encourage those people who do use transit to come back and continue to use it? And with the pandemic having an effect on the way we work and where we work, how is that going to make things even more challenging?
We know that commuting patterns are going to change after the pandemic. And I think only time will tell exactly how we need to become a country and a society where transit is a means of choice for getting around. I heard somebody pose the question of what real development looks like. Is it where every low-income person has a car? Or is it where even the high-income person would prefer to take the subway or the bus? We want to make sure that you can get around and choose to get around in ways that are more efficient, not just in terms of pollution, but also in terms of congestion. And transit, obviously, is a big part of that.
Transit is also changing, right? We’re getting smarter about it. You look at the variety of options that are emerging, in addition to what we’re used to, with subways and buses and light rail. We have [bus rapid transit] becoming more and more prevalent in some communities. Any time you can get the majority of the benefit for a fraction of the cost, we’ve got to look into those possibilities, too. And of course, you have micromobility, which we don’t necessarily think of as transit, but active transportation, that kind of overlaps between active transportation and whatever we decide to call things like scooters and e-bikes. All of these things, I think, hold a ton of potential for breaking us out of the old paradigm of how you get around.
We’ll get to micromobility in a second, but really quick first, I wanted to ask you about Vision Zero. It’s something that has become very popular in some cities around the country. Would you support a national Vision Zero goal? No traffic or road deaths by, say, 2050?
I certainly believe in national support for that concept of zero fatalities. I think the most promising way to get there is to build up from the community level. It sounds to some people like pie in the sky, except you see communities that are actually doing this or making incredible progress. Recently, Oslo, [Norway], I think, had a year with zero vehicle deaths and almost zero pedestrian deaths. I’ve got to double-check the numbers there. But if communities can do it at the community level, that gives us tools to build into a national picture. As a former mayor, it won’t surprise you that one of my favorite tools to deploy is federal support for local action because I don’t believe we’re going to cook up all of the solutions here in Washington. But we’ve got to support the people at the local level and then cross-pollinate them when somebody hits on something good.
I want to ask you kind of a weird question, and I don’t know how you’re going to respond to this. But we saw last year during the election how cars, and especially large trucks and SUVs, showed up in the larger political and cultural conflicts we were having in this country, with certain people using vehicular intimidation against their political opponents.
EVs are often dismissed out of hand by people who prefer large emissions-belching vehicles. And there’s an academic who calls this phenomenon “petro-masculinity.” I was wondering if you have any thoughts on whether we can reverse this trend of vehicular intimidation and petro-masculinity and what the federal government can do about that?
That’s actually a new word for me. Look, for Americans, cars have always been more than a means to an end. And that’s okay. I mean, they have cultural significance. They have emotional significance. And we don’t have to do away with that. But it does have to evolve. And I think we can get to a place where we take a lot of pride in the evolution of our cars, especially when you look at where EVs are now. I think some people picture EVs, and they think of small cars for getting around urban neighborhoods. And that’s one kind of EV.
But so much of the stuff coming out of Detroit, as well as newer companies, in terms of the kinds of trucks and SUVs that they’re developing on an electric basis, are also really remarkable. And I think they still speak to that itch that I don’t think of as uniquely masculine, but perhaps is particularly American, of wanting to get out there in a muscular way on the open road and have these vehicles perform. But you know, again, I don’t think it has to be locked into the old way. I mean, I think there was probably a time when a man’s relationship with his horse had more cultural signature and social significance than it does today. But it doesn’t mean that we’ve abandoned the special understanding about the way people and horses relate. We just don’t depend on them as a way to get around the way we used to, which is probably better for the horses as well as people.
The jobs plan wants to incentivize manufacturers to make it easier to transition to electrification. Some countries around the world have actually gone so far as to say we want to phase out gas-powered cars at a certain date, and some states have said that as well, California most notably. Do you see a need at the national level to say we need to phase out the production and selling of gas-powered cars by a certain date?
That’s not our approach federally, but I will say it’s remarkable seeing how industry is already headed that way. A lot of them are talking about all-EV fleet goals by very specific dates. But the other thing I want to point out is, no matter how good we are at EV adoption, no matter how quickly we get there, there are going to be a lot of internal combustion engines on the road for a long time. It’s one of the reasons why we can’t back off on having rigorous and ambitious tailpipe emission standards. In addition to driving EV adoption, it’s really got to be both.
Your department has decided to withdraw the rule that would have prevented California from being able to set its own tailpipe emissions. Do you see a need to also address what the prior administration did with regard to the rollback of the Obama-era CAFE standards on emissions?
We’re actively looking at that, bearing in mind the legal language around “maximum feasible.” CAFE standards have a remarkable track record of inducing industry to do more than they might themselves [have] thought possible and gaining a business perspective as well as a climate perspective. So [President Biden’s] executive order was clear in challenging us to quickly act, not only on the so-called Safe-1 rule, which is where we saw the notice go out, having to do with preemption, but also Safe-2, which takes a look at the Trump administration’s actions to try to dismantle that level of ambition. And that’s something that we’ll be continuing to evaluate going into the summer.
There is a bill that’s been introduced in the House that would offer a rebate for people who purchase electric bikes. You mentioned micromobility as a component of the solution of getting more people out of their cars. Your administration supports rebates and tax incentives for electric vehicles. Would you also support rebates and incentives for other types of electric vehicles, smaller ones that are less onerous on the environment?
Well, I haven’t seen the specifics of this legislation. But we definitely want to do everything we can to encourage the adoption of bike commuting by more Americans. And that has to do a lot of things. Part of it may be the economics of it. A lot of it is just the ease of getting around and making sure we’re encouraging cities to take on complete streets approaches and safe bike lanes. The other thing we’ve noticed is that there’s data suggesting that you really hit a tipping point, a good tipping point, once you get to a certain level of bike commuting, in terms of safety, because cars learn to expect bikes in a way that, frankly, they still don’t in most US cities. And so all of these things are taken together. Yes, the economics but also the convenience and certainly the safety are what we have to do in order to design for a world where we get not just the climate benefits but the congestion benefits and, frankly, the public health benefits of more people getting around on two wheels.
How’s it been biking around Washington lately?
You know, it’s pretty good. I’m trying to mystery shop the bike infrastructure around here. And I’ll say it’s impressive what the city has done. But you can tell it’s grafted onto a street system that wasn’t originally designed with this in mind, which is fine. I mean, you know, some of the older streets around here probably weren’t designed with cars in mind. It takes work and, you know, whether you’re talking about protected bike lanes or environments where you can safely share the road, when I’m commuting into the DOT here — which I don’t claim to do every day, but I do some days — on a bike, it’s good. But it needs more support from the federal level. And I think that’s true of cities large and small.
Speaking to the way that our cities are designed, in the first half of the 20th century, the highway system created physical barriers between mostly Black and minority communities. It was destructive, and it showed how transportation can be a civil rights and social justice issue. How do you adopt policies that help address some of those issues?
So to me, this is one of the most important things in the jobs plan. And we’re already writing it into things like our approach on discretionary grants here in the department. We made sure that the INFRA grants that went out earlier this year and the RAISE grants, formerly known as TIGER, reflect this as well. Precisely because we know it’s often been with federal dollars and federal policies that a lot of communities were destroyed or divided by transportation infrastructure like highways. But we have a chance to put this right, and when we do, we think everybody benefits.
Sometimes that might mean removing a structure that caused harm. Sometimes it might mean bridging over and under it. The important thing is to connect where there has been division and to invest where there has been neglect. And that’s important, not just in terms of the kinds of neighborhoods and communities that get the infrastructure delivered to them, but also who gets to do the work. And that’s a real pressing issue that doesn’t get enough attention: getting more diverse participation in skilled trades and union labor and getting more diverse ownership of the businesses that get a shot at the billions and billions of dollars of infrastructure spending that is procured through government dollars in this country. That’s a big lift, but we’ve got to take it seriously so that our choices can actually enhance equity and not [contribute] to the problem, as has happened so often in the past.
We also just need to talk about it, and we need to face up to this, not in the spirit of guilt but in the spirit of problem-solving. I made some comments about this a few weeks ago, and certain pockets of the internet erupted. I was surprised they were surprised, but it revealed that there’s actually a lot of work we’ve got to do just to educate ourselves about this.
Yeah, be careful about those pockets of the internet. One last question, and I’ll let you go. Thank you so much for your time. The previous two administrations took a very hands-off approach to the development and regulation of autonomous vehicles. Do you expect this administration to follow suit?
I think that we need to have policy catch up to the technology. You know, it feels like a bit of a moving target. I have noticed that the widespread adoption of driverless cars has been exactly seven years away for roughly 10 years. But we are now at a level in terms of the technologies that are out there that we’ve got to be managing the safety implications of it. Not only because it’s so important, obviously, that these be safe. But also, frankly, because the industry is going to need some certainty in order to be able to continue development.
And look, automated vehicles hold out a lot of promise for seniors and Americans with disabilities. And you know, there are implications all the way down to the land use possibilities in a country that doesn’t need as much surface parking. But we’re still a ways away from that. And we want to make sure we get there responsibly, equitably, and safely. And that does, I think, mean that we need to lean in further, using our existing authorities, but also updating them — which, of course, is going to mean working with Congress.
Pony.ai’s next-generation robotaxi is distinctive because it appears to be missing the cone-shaped LIDAR sensor perched on the roof that’s typical of most autonomous vehicles. That’s because the startup, which is based in Silicon Valley and Guangzhou, China, is teaming up with Luminar to use the fast-growing LIDAR company’s sleek new sensors that are more flush with the vehicle’s roof.
The new vehicles with Luminar’s LIDAR sensor won’t be up and running until 2022, but Pony.ai founder and CEO James Peng said preparation was already underway for mass production of the next-gen robotaxi. After testing the vehicle next year, Peng said it will be ready for the company’s robotaxi customers in 2023. Pony.ai currently offers limited ride-hailing in its autonomous vehicles in five markets: Irvine and Fremont in California; Beijing, Shanghai, and Guangzhou in China.
Pony also announced that it has driven more than 5 million kilometers (3.1 million miles) across an operational domain of 850 km and has provided over 250,000 robotaxi rides. The startup claims to be the first company to launch an autonomous ride-hailing operation and offer self-driving car rides to the general public in China.
The company was also recently approved to test its fully autonomous vehicles, without safety drivers behind the wheel, on public roads in California. Peng said Pony was currently seeking approval to include those vehicles in its robotaxi service in California. “We are actually at the final stage of getting the approval for travelers,” he said.
LIDAR, the laser sensor that sends out millions of laser points per second and measures how long they take to bounce back, is seen as a key ingredient of autonomous driving. Peng said that Pony would use four of Luminar’s Iris sensors, two on the roof and two more on the sides of the vehicle, in order to “generate a very high resolution LIDAR image for our autonomous driving vehicles.”
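The underlying time-of-flight principle is simple enough to sketch in a few lines. This is an illustrative calculation only, not Luminar’s or Pony.ai’s actual processing pipeline; the function name is a hypothetical helper:

```python
# Speed of light in a vacuum, in meters per second.
SPEED_OF_LIGHT = 299_792_458.0

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a target given a laser pulse's round-trip time.

    The pulse travels to the target and back, so the one-way
    distance is half the total path the light covers.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

At the scale of Luminar’s quoted ranges, a pulse that returns after roughly 1.7 microseconds corresponds to a target about 250 meters away, which hints at how fast the sensor electronics must time each of those millions of points per second.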
Luminar says that its Iris LIDARs have a maximum range of 500 meters (1,640 feet), including a 250-meter range for objects with less than 10 percent reflectivity. Luminar’s sensors are also distinct from most other LIDAR sensors, which were once famously described as looking like “spinning Kentucky Fried Chicken buckets.” In contrast, Iris is only about 10 centimeters tall. Austin Russell, founder and CEO of Luminar, described it as a “slim form factor that’s meant to be seamlessly integrated into the vehicle design.”
Apple is awarding Corning another $45 million investment from its Advanced Manufacturing Fund, in addition to the $450 million it’s already given to the US-based company over the past four years. According to Apple’s announcement, the investment will “expand Corning’s manufacturing capacity in the US and drive research and development into innovative new technologies that support durability and long-lasting product life.”
Corning provides glass for a variety of Apple products, including the iPhone, iPad, and Apple Watch. The two companies have a history dating back to the original iPhone. Last year, they collaborated on the iPhone 12 lineup’s Ceramic Shield technology, which Apple claims is “tougher than any smartphone glass” and makes its latest flagships four times more resistant to damage from drops. Beyond Apple, Corning’s Gorilla Glass is used in phones from countless Android manufacturers, including in Samsung’s Galaxy S21 Ultra.
Apple doesn’t say exactly how Corning will use the $45 million investment, but its timing coincides with recent reports that Apple could launch a foldable iPhone in 2023. Back in 2019, we heard Corning was developing a bendable version of its glass, and last February the company said it expects devices using the technology to reach the market in 12 to 18 months. If it works, the glass could allow for durable foldable smartphones that don’t require the protective plastic layer used in Samsung’s latest foldables.
“Today, when you buy a phone with Gorilla Glass, you’re touching glass … that’s what we’re working towards,” Corning said of its ambitions for bendable glass last year.
Vivo’s upcoming X-series flagship phones will receive three years of Android OS upgrades and security updates, the company announced today. The policy will come into force for phones launched after July 2021 in Europe, Australia, and India.
“We are making a promise to our customers that they will be able to enjoy a premium smartphone experience for an extended period and continue to benefit from the latest software features,” Vivo’s CTO and senior vice president Yujian Shi said in a statement.
Three years of OS updates is a big improvement over the two years that has previously been the standard for most Android manufacturers, but in the future this could extend to as much as four years. Last December, Google and chip manufacturer Qualcomm announced they were working to make it easier for manufacturers to offer as many as four generations of Android OS and security updates, starting with devices equipped with Qualcomm’s latest flagship processor, the Snapdragon 888.
Vivo’s new policy puts it ahead of fellow BBK Electronics smartphone brands OnePlus and Oppo. As of 2018, OnePlus’s official policy has been to offer two years of Android version upgrades and three years of security updates. Meanwhile, the most recent statement we could find from Oppo (via AusDroid) says the company offers two years of security updates, and that its general policy is to offer two generations of Android OS updates. Today’s announcement will undoubtedly create pressure on Oppo and OnePlus to follow Vivo’s example.
These figures pale in comparison to Apple’s update history. Last year it released the latest version of iOS, version 14, on devices as old as 2015’s iPhone 6S, the fifth major update to have come to the phone.