Going to try to clear this up from speculation as best I can.
Niantic was a spinoff divested from Google roughly a decade ago that had created a game called Ingress. Ingress used OpenStreetMap data to place players in the real world, and players could nominate locations as points of interest (POI), which Niantic's human moderators judged for sufficient noteworthiness. A few years after Ingress was released, Niantic licensed limited rights to the Pokemon IP and bootstrapped Pokemon Go from this POI data. Individual points of interest became Pokestops and Gyms. Players had to physically go to these locations to receive the in-game items needed to keep playing or to battle other players' Pokemon.
From the beginning, Pokemon Go had AR support, but it was gimmicky and not widely used. Players would post photos of the real world with Pokemon overlaid and then turn it off, since it was a significant battery drain and only slowed down farming in-game items. The game has always been a grind: play as much as possible to catch Pokemon and spin Pokestops, and you get rewards for doing so. Eventually, Niantic introduced raids as the only way to catch legendary Pokemon. These were multiplayer, in-person events held at prescribed times: a timer starts in the game, players have to be at the same place at the same time to battle a legendary Pokemon together, and if they defeat it, they're rewarded with a chance to catch one.
Something like a year after raids were released, Niantic added research tasks as a way to catch mythical Pokemon. These required you to complete various in-game tasks, including visiting specific places. Much later, research tasks started to include visiting designated Pokestops, taking video footage from a large enough variety of angles to satisfy the game, and uploading it. They started doing this four or five years ago, and getting any usable data out of it must have required an enormous amount of human curation, largely volunteer effort from players themselves who moderated the uploads. The game itself would give you credit simply for having the camera on while moving around enough, and it was fairly popular to simply film the sidewalk; the game had no way to tell this was not really footage of the POI.
The quality of this data has always been limited. Saying they've managed to build local models of about 1 million individual objects leaves me wondering what the rate of success is. They've had hundreds of millions of players scanning presumably hundreds of millions of POI for half a decade. But a lot of the POI no longer exist, and many didn't exist even when Pokemon Go was released. Players are incentivized to have as many POI near them as possible because POI provide the only way to actually play, and Niantic is incentivized to leave as much as it can in the game and continually add more POI because, otherwise, nobody will play. The mechanics have always been tremendously imbalanced: living near the center of a large city with many qualifying locations means rich, rewarding gameplay, whereas living in the suburbs or a rural area means you have little to do and no hope of ever gaining the points city players can.
This means many scans are of objects that aren't there. Near me, that includes murals that have long been painted over, Confederate monuments removed during the Black Lives Matter protests of recent years, and small pieces of art, like metal sculptures and a mailbox decorated to look like SpongeBob, that simply are not there anymore for one reason or another, yet the POI persist in the database anyway. Live scans will show something very different from the original photo that still appears in-game to identify the POI.
Another problem is many POI can't be scanned from all sides. They're behind fences, closed off because of construction, or otherwise obstructed.
Yet another problem is GPS drift. I live near downtown Dallas right now, but when the game started, I lived smack dab in the city center, across the street from AT&T headquarters. I started playing as something to do while walking during rehab from spine surgeries, but I was often bedridden and couldn't actually leave the apartment. No problem. I could sometimes receive upwards of 50 km a day of walking credit simply by leaving my phone on with the game open. As satellite line of sight is continually obstructed and unobstructed by the tall buildings around your actual location, your position on the map jumps around. The game has a built-in speed limit meant to prevent people from playing while driving: if you jump too fast, you won't get credit, but as long as the jumps in location are small enough to keep your average over some sampling interval below that limit, you're good to go. Positional accuracy within a city center, where most of the POI actually are, is very poor.
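To illustrate how drift alone racks up phantom distance, here's a minimal simulation. The speed limit, sampling interval, and jitter magnitude are all my own assumptions for illustration, not Niantic's actual (unpublished) values:

```python
import math
import random

# Assumed values -- Niantic's real limit and sampling window are not public.
SPEED_LIMIT_KMH = 10.5   # hypothetical walking-speed cap
SAMPLE_INTERVAL_S = 60   # hypothetical averaging window

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

random.seed(0)
lat, lon = 32.7767, -96.7970   # roughly downtown Dallas; phone never moves
credited = 0.0
for _ in range(24 * 60):       # one day of one-minute position fixes
    # Urban-canyon multipath: each fix lands tens of meters from the last
    nlat = lat + random.uniform(-0.0005, 0.0005)
    nlon = lon + random.uniform(-0.0005, 0.0005)
    d = haversine_m(lat, lon, nlat, nlon)
    avg_speed_kmh = (d / SAMPLE_INTERVAL_S) * 3.6
    if avg_speed_kmh <= SPEED_LIMIT_KMH:   # each jump passes the speed check
        credited += d
    lat, lon = nlat, nlon

print(f"credited walking distance: {credited / 1000:.1f} km")
```

With jumps of a few dozen meters per minute, every fix stays well under the speed limit, yet the stationary phone accumulates tens of kilometers of "walking" per day.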
They claim here that they have images from "all times of day," which is possibly true if they literally mean daylight hours. I'm awake here writing this comment at 2:30 AM and have always been a very early riser. I stopped playing this game last summer, but when I still played, it was mostly in darkness, and one of the reasons I quit was the frustration of constantly being given research tasks I could not possibly complete because the game would reject scans made in the dark.
Finally, POI in Ingress and Pokemon Go are all man-made objects. Whatever they're able to get out of this would be trained on nothing from the natural world.
Ultimately, I'm interested in how many POI the entire map actually has globally and what proportion the 1 million they've managed to build working local models of represents. Seemingly, it has to be limited to objects that (1) still exist, (2) are sufficiently unobstructed from all sides, and (3) sit in a place free enough of GPS obstructions that players' on-map locations are themselves accurate.
That isn't nothing, but I'm enormously skeptical that they can use this to build what they're promising here: a fully generalizable model a robot could use to navigate arbitrary locations globally, as opposed to something that can navigate fairly flat city peripheries and suburbs during daylight hours. If Meta can really get a large enough number of people to wear sunglasses with always-on cameras, this kind of data will eventually exist, but I highly doubt what Niantic has right now is enough.