French Government Uses AI to Spot Undeclared Swimming Pools — And Tax Them, by James Vincent

“The French government has collected nearly €10 million in additional taxes after using machine learning to spot undeclared swimming pools in aerial photos. In France, housing taxes are calculated based on a property’s rental value, so homeowners who don’t declare swimming pools are potentially avoiding hundreds of euros in additional payments.”

“The project to spot the undeclared pools began last October, with IT firm Capgemini working with Google to analyze publicly available aerial photos taken by France’s National Institute of Geographic and Forest Information. Software was developed to identify pools, with this information then cross-referenced with national tax and property registries.”
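For readers curious what the final cross-referencing step might look like mechanically, here is a minimal Python sketch. Neither Capgemini's nor Google's actual pipeline is public, so everything below — the detect_pools stand-in, the parcel IDs, and the toy registries — is hypothetical illustration only.

```python
# Illustrative sketch only: a hypothetical detector plus registry lookup,
# not the real Capgemini/Google system.

def detect_pools(aerial_tiles):
    """Stand-in for an image model that returns the parcel IDs where a
    pool was detected in an aerial photo tile (hypothetical)."""
    return {tile["parcel_id"] for tile in aerial_tiles if tile["has_pool"]}

def find_undeclared(detected_parcels, declared_registry):
    """Cross-reference detections with the tax registry: any parcel with a
    detected pool that is absent from the declared set gets flagged."""
    return detected_parcels - declared_registry

# Toy data: three detected pools, two of them declared to the tax office.
tiles = [
    {"parcel_id": "A-101", "has_pool": True},
    {"parcel_id": "A-102", "has_pool": False},
    {"parcel_id": "B-203", "has_pool": True},
    {"parcel_id": "C-307", "has_pool": True},
]
declared = {"A-101", "B-203"}

print(find_undeclared(detect_pools(tiles), declared))  # {'C-307'}
```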

“The project is somewhat limited in scope, and has so far analyzed photos covering only nine of France’s 96 metropolitan departments. But even in these areas, officials discovered 20,356 undeclared pools, according to an announcement this week from France’s tax office, the General Directorate of Public Finance (DGFiP), first reported by Le Parisien.”

“As of 2020, it was estimated that France had around 3.2 million private swimming pools, but construction has reportedly surged as more people worked from home during COVID-19 lockdowns, and summer temperatures have soared across Europe.”

“Ownership of private pools has become somewhat contentious in France this year, as the country has suffered from a historic drought that has emptied rivers of water. An MP for the French Green party (Europe Écologie les Verts) made headlines after refusing to rule out a ban on the construction of new private pools. The MP, Julien Bayou, said such a ban could be used as a ‘last resort’ response. He later clarified his remarks on Twitter, saying: ‘[T]here are ALREADY restrictions on water use, for washing cars and sometimes for filling swimming pools. The challenge is not to ban swimming pools, it is to guarantee our vital water needs.’”

Click here for the full article

How People Adapt to Cybersickness From Virtual Reality, by Rachel Cramer – Iowa State University

“While virtual reality has been around for decades, a combination of higher-resolution graphics, smoother tracking of the user’s movements and cheaper, sleeker headsets has propelled the immersive technology into arenas beyond gaming and military training.”

“In health care, VR has been used to prepare surgeons for complicated operations and help burn patients better manage their pain. In education, it’s opened doors for students to tour world-famous museums, historical sites – even the human brain.”

“But Jonathan Kelly, a professor of psychology and human-computer interaction at Iowa State University, says the biggest barrier to VR becoming mainstream is cybersickness. Previous studies show more than half of first-time headset users experience the phenomenon within 10 minutes of being exposed to VR.”

“Many of the symptoms – nausea, dizziness, headaches, eye fatigue, sweating and a lingering sense of movement – overlap with other forms of motion sickness. Kelly explained they’re all caused by conflicting sensory information.”

“‘When someone reads a book in a moving car, their eyes recognize a stationary environment while parts of the inner ear and brain that are involved in balance and spatial orientation pick up accelerations, turns and bumps,’ said Kelly.”

“In a virtual setting, the inverse is true. An individual’s visual system perceives the rush from a roller coaster ride while sitting on a couch. Even without the stomach drop or whiplash, the dissonance can make someone want to hurl.”

Click here for the full article

Google Adds AI Language Skills to Alphabet’s Helper Robots to Better Understand Humans, by James Vincent

“Google’s parent company Alphabet is bringing together two of its most ambitious research projects — robotics and AI language understanding — in an attempt to make a ‘helper robot’ that can understand natural language commands.”

“Since 2019, Alphabet [has] been developing robots that can carry out simple tasks like fetching drinks and cleaning surfaces. This Everyday Robots project is still in its infancy — the robots are slow and hesitant — but the bots have now been given an upgrade: improved language understanding courtesy of Google’s large language model (LLM) PaLM.”

“Most robots only respond to short and simple instructions, like ‘bring me a bottle of water.’ But LLMs like GPT-3 and Google’s MUM are able to better parse the intent behind more oblique commands. In Google’s example, you might tell one of the Everyday Robots prototypes ‘I spilled my drink, can you help?’ The robot filters this instruction through an internal list of possible actions and interprets it as ‘fetch me the sponge from the kitchen.’”
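The "filter through an internal list of possible actions" step is the interesting part. In Google's published PaLM-SayCan work, the language model scores every candidate action against the request and that score is weighed against the robot's learned ability to actually perform the action. The sketch below substitutes a toy keyword table for the model's scoring, so the action list and hints are illustrative stand-ins, not Google's system.

```python
# Illustrative sketch: a toy keyword table standing in for LLM-based
# scoring of candidate actions (the real system uses PaLM likelihoods).

# The robot's fixed vocabulary of executable actions.
CANDIDATE_ACTIONS = [
    "fetch the sponge from the kitchen",
    "bring a bottle of water",
    "wipe the table",
    "do nothing",
]

# Hypothetical associations an LLM might infer between a user's situation
# and a useful action (toy stand-in for model-based interpretation).
HINTS = {
    "spilled": "fetch the sponge from the kitchen",
    "thirsty": "bring a bottle of water",
    "sticky": "wipe the table",
}

def choose_action(request: str) -> str:
    """Map an oblique request onto the robot's fixed action vocabulary."""
    for keyword, action in HINTS.items():
        if keyword in request.lower():
            assert action in CANDIDATE_ACTIONS  # hints must name real actions
            return action
    return "do nothing"

print(choose_action("I spilled my drink, can you help?"))
# -> fetch the sponge from the kitchen
```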

“Yes, it’s kind of a low bar for an ‘intelligent’ robot, but it’s definitely still an improvement! What would be really smart would be if that robot saw you spill a drink, heard you shout ‘gah oh my god my stupid drink’ and then helped out.”

Click here for the full article

VR Surgical Training Platform Raises $20M, Further Solidifying VR’s Place in Medicine, by Ben Lang

“VR surgical training platform FundamentalVR today announced it has raised a $20 million Series B investment to further expand its platform. The company is among a growing list of companies taking root in what appears to be a prime use-case for VR.”

“The round, which brings the company’s total fundraising to a purported $30 million, was led by EQT Life Sciences and prior investor Downing Ventures. As part of the investment, Drew Burdon of EQT Life Sciences will join FundamentalVR’s Board of Directors.”

“FundamentalVR says it combines high-fidelity medical simulations with VR and haptics so that trainees can ‘experience the sights, sounds, and physical sensations of real-life surgery.’ The goal, says the company, is to make it more affordable for medical institutions to bring ‘professionally accredited surgical training [to] their organizations.’”

“According to FundamentalVR, the investment will be used to ‘enable further development of [the surgical training platform], the machine learning data insights product, and [drive] geographic expansion throughout the US.’”

“‘Our platform can conduct a walkthrough of a procedure through to a full operation, facilitating surgical skills transfer—which is why we have been enthusiastically embraced throughout the medical industry, from med-device manufacturers to pharmaceuticals,’ says FundamentalVR CEO Richard Vincent. ‘Our immersive environments transform surgical skills acquisition in a scalable, low-cost, multiuser way. We are excited to scale our vision of creating a medical education environment unhindered by borders.’”

Click here for the full article

Self-Taught AI Shows Similarities to How the Brain Works, by Anil Ananthaswamy

“For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled ‘tabby cat’ or ‘tiger cat,’ for example, to ‘train’ an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient.”

“Such ‘supervised’ training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.”

“‘We are raising a generation of algorithms that are like undergrads [who] didn’t come to class the whole semester and then the night before the final, they’re cramming,’ said Alexei Efros, a computer scientist at the University of California, Berkeley. ‘They don’t really learn the material, but they do well on the test.’”

“For researchers interested in the intersection of animal and machine intelligence, moreover, this ‘supervised learning’ might be limited in what it can reveal about biological brains. Animals — including humans — don’t use labeled data sets to learn. For the most part, they explore the environment on their own, and in doing so, they gain a rich and robust understanding of the world.”

“Now some computational neuroscientists have begun to explore neural networks that have been trained with little or no human-labeled data. These ‘self-supervised learning’ algorithms have proved enormously successful at modeling human language and, more recently, image recognition. In recent work, computational models of the mammalian visual and auditory systems built using self-supervised learning models have shown a closer correspondence to brain function than their supervised-learning counterparts. To some neuroscientists, it seems as if the artificial networks are beginning to reveal some of the actual methods our brains use to learn.”
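To make the "no human labels" point concrete, here is a minimal sketch of one common self-supervised objective: hide a word and treat the original word as the training target, so the labels come from the data itself. The helper below is illustrative, not any specific model's training code.

```python
# A minimal illustration of the self-supervised idea: training targets are
# carved out of the raw data itself, so no human annotation is needed.

def masked_pairs(tokens, mask="<MASK>"):
    """For each position, hide one token and ask the model to predict it.
    The 'label' is just the original token, supplied by the data itself."""
    pairs = []
    for i, target in enumerate(tokens):
        inputs = tokens[:i] + [mask] + tokens[i + 1:]
        pairs.append((inputs, target))
    return pairs

sentence = "the cow stood in the field".split()
for inputs, target in masked_pairs(sentence)[:2]:
    print(" ".join(inputs), "->", target)
# The model learns by filling in the blanks, not from human annotations.
```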

Click here for the full article

Google’s Search AI Now Looks for General Consensus to Highlight More Trustworthy Results, by Mariella Moon

“You know that highlighted piece of text at the very top of a Google search results page when you look up a piece of information? That's called a ‘featured snippet,’ and it's meant to provide you with a quick answer to your query. Now, Google is making sure that the information it highlights is reliable and accurate by using its latest AI model, the Multitask Unified Model, so that Search can now look for consensus when deciding on a snippet to feature.”

“Google's Search AI can now check snippet callouts — the larger-font text that serves as the heading of a featured snippet — against other high-quality sources online. It can figure out if there's a general consensus for that callout, even if sources use different words or concepts to describe the same fact or idea. Google says this ‘consensus-based technique has meaningfully improved the quality and helpfulness of featured snippet callouts.’”
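As a rough illustration of what "looking for consensus" could mean mechanically, here is a toy sketch. Google's actual system uses MUM to match meaning across different wordings; the Jaccard word-overlap scorer below is a crude stand-in, and the threshold and quorum values are invented for the example.

```python
# Sketch of a consensus check over sources, assuming some similarity
# function between snippets. Word overlap cannot really match "different
# words, same fact" the way a semantic model like MUM can.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between word sets (toy stand-in for a semantic model)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def has_consensus(callout: str, sources: list[str],
                  threshold: float = 0.5, quorum: int = 2) -> bool:
    """Feature the callout only if enough independent sources agree with it."""
    agreeing = sum(similarity(callout, s) >= threshold for s in sources)
    return agreeing >= quorum

callout = "water boils at 100 degrees celsius at sea level"
sources = [
    "at sea level water boils at 100 degrees celsius",
    "water boils at a temperature of 100 degrees celsius at sea level",
    "water freezes at 0 degrees celsius",
]
print(has_consensus(callout, sources))  # True: two of three sources agree
```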

“But for some queries, such as those with false premises, displaying featured snippets isn't the best way to deliver information. To address that issue, Google tweaked its Search AI so that this particular update reduces the triggering of snippets for those types of queries by 40 percent.”

“Google is now also making its ‘About this result’ tool more accessible. That's the panel that pops up when you click on the three dots next to a result, showing you details about the source website before you even visit. Starting later this year, it will be available in eight more languages, including Portuguese, French, Italian, German, Dutch, Spanish, Japanese and Indonesian. It's adding more information to the tool starting this week, as well, including how widely a publication is circulated, online reviews about a company, or whether a company is owned by another entity. They're all pieces of information that could help you decide whether a particular source is trustworthy.”

“Finally, in case Google's AI has determined that the overall results for a search query may have questionable quality, the results page will now display a content advisory. ‘It looks like there aren't many great results for this search,’ the advisory will say, telling you to check the source you're looking at or to try other search terms. It could help you stay alert and be on the lookout for potential fake information while checking the results the page has presented.”

Click here for the full article

In Simulation of How Water Freezes, Artificial Intelligence Breaks the Ice, by Princeton University

“A team based at Princeton University has accurately simulated the initial steps of ice formation by applying artificial intelligence (AI) to solving equations that govern the quantum behavior of individual atoms and molecules.”

“The resulting simulation describes how water molecules transition into solid ice with quantum accuracy. This level of accuracy, once thought unreachable due to the amount of computing power it would require, became possible when the researchers incorporated deep neural networks, a form of artificial intelligence, into their methods. The study was published in the journal Proceedings of the National Academy of Sciences.”
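The core trick, in outline, is replacing an expensive quantum calculation with a cheap learned surrogate that can then be evaluated millions of times inside a long simulation. The sketch below illustrates only that general surrogate idea: a toy pair potential and a polynomial fit stand in for the study's quantum-mechanical reference data and deep neural networks.

```python
import numpy as np

# Toy illustration of the surrogate idea: fit a cheap model to reproduce an
# expensive reference calculation, then use the model in the simulation.
# The real study trains deep neural networks on quantum energies and forces.

def reference_energy(r):
    """Stand-in for an expensive quantum calculation: a Lennard-Jones-like
    pair energy as a function of interatomic distance r."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# "Training data": a modest number of expensive reference evaluations.
r_train = np.linspace(0.9, 2.5, 40)
e_train = reference_energy(r_train)

# Fit a cheap surrogate (a polynomial in 1/r here; the paper uses deep nets).
coeffs = np.polyfit(1.0 / r_train, e_train, deg=12)
surrogate = lambda r: np.polyval(coeffs, 1.0 / r)

# The surrogate can now be evaluated millions of times at negligible cost.
for r in np.linspace(1.0, 2.4, 5):
    print(f"r={r:.2f}  reference={reference_energy(r):+.4f}  "
          f"surrogate={surrogate(r):+.4f}")
```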

“‘In a sense, this is like a dream come true,’ said Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry, who co-pioneered the approach of simulating molecular behaviors based on the underlying quantum laws more than 35 years ago. ‘Our hope then was that eventually we would be able to study systems like this one, but it was not possible without further conceptual development, and that development came via a completely different field, that of artificial intelligence and data science.’”

“The ability to model the initial steps in freezing water, a process called ice nucleation, could improve the accuracy of weather and climate modeling, as well as of other processes like flash-freezing food.”

Click here for the full article

VR Is as Good as Psychedelics at Helping People Reach Transcendence, by Hana Kiros

“Fifteen years ago, David Glowacki was walking in the mountains when he took a sharp fall. When he hit the ground, blood began leaking into his lungs. As he lay there suffocating, Glowacki’s field of perception swelled. He peered down at his own body—and, instead of his typical form, saw that he was made up of balled-up light.”

“‘I knew that the intensity of the light was related to the extent to which I inhabited my body,’ he recalls. Yet watching it dim didn’t frighten him. From his new vantage point, Glowacki could see that the light wasn’t disappearing. It was transforming—leaking out of his body into the environment around him.”

“Since his accident, Glowacki—an artist and computational molecular physicist—has worked to recapture that transcendence.”

“A VR experience called Isness-D is his latest effort. And on four key indicators used in studies of psychedelics, the program showed the same effect as a medium dose of LSD or psilocybin (the main psychoactive component of ‘magic’ mushrooms), according to a recent study in Nature Scientific Reports.”

“Isness-D is designed for groups of four to five people based anywhere in the world. Each participant is represented as a diffuse cloud of smoke with a ball of light right about where a person’s heart would be.”

“Participants can partake in an experience called energetic coalescence: they gather in the same spot in the virtual-reality landscape to overlap their diffuse bodies, making it impossible to tell where each person begins and ends. The resulting sense of deep connectedness and ego attenuation mirrors feelings commonly brought about by a psychedelic experience.”

Click here for the full article

Meta Is Putting Its Latest AI Chatbot on the Web for the Public to Talk To, by James Vincent

“Meta’s AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system in order to collect feedback on its capabilities.”

“The bot is called BlenderBot 3 and can be accessed on the web. (Though, right now, it seems only residents in the US can do so.) BlenderBot 3 is able to engage in general chitchat, says Meta, but also answer the sort of queries you might ask a digital assistant, ‘from talking about healthy food recipes to finding child-friendly amenities in the city.’”

“The bot is a prototype and built on Meta’s previous work with what are known as large language models or LLMs — powerful but flawed text-generation software of which OpenAI’s GPT-3 is the most widely known example. Like all LLMs, BlenderBot is initially trained on vast datasets of text, which it mines for statistical patterns in order to generate language. Such systems have proved to be extremely flexible and have been put to a range of uses, from generating code for programmers to helping authors write their next bestseller. However, these models also have serious flaws: they regurgitate biases in their training data and often invent answers to users’ questions (a big problem if they’re going to be useful as digital assistants).”

“This latter issue is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it’s capable of searching the internet in order to talk about specific topics. Even more importantly, users can then click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.”
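Here is a toy sketch of that citation plumbing: return an answer together with the document it was drawn from. BlenderBot 3's actual pipeline (generating search queries, reading results, conditioning its generation on them) is far more involved; the documents, URLs, and word-overlap retriever below are all made up for illustration.

```python
# Minimal sketch of "answering with citations": retrieve the best-matching
# document for a query and attach it as the source. A toy retriever only;
# the real system searches the live internet and generates fluent replies.

DOCUMENTS = {
    "https://example.com/espresso":
        "espresso is brewed by forcing hot water through finely ground coffee",
    "https://example.com/tea":
        "green tea is steeped at lower temperatures than black tea",
}

def retrieve(query: str):
    """Pick the document sharing the most words with the query (toy scorer)."""
    q = set(query.lower().split())
    return max(DOCUMENTS.items(), key=lambda kv: len(q & set(kv[1].split())))

def answer(query: str) -> str:
    url, passage = retrieve(query)
    # A real chatbot would generate a reply conditioned on the passage;
    # here we simply quote the passage and cite where it came from.
    return f"{passage} (source: {url})"

print(answer("how is espresso made?"))
```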

Click here for the full article

Surgeons Use Virtual Reality Techniques to Separate Conjoined Twins, by Adela Suliman

“LONDON — After emerging from a final risky surgery, Brazilian twin brothers Arthur and Bernardo Lima were met with an emotional outpouring of applause, cheers and tears from medical staff and family members.”

“For the first time, the boys lay separated, face-to-face and holding hands in a shared hospital bed in Rio de Janeiro, after doctors there and almost 6,000 miles away in London worked together using virtual reality techniques to operate on the conjoined 3-year-olds.”

“The highly complex medical procedure separated the twins, who come from Roraima in rural northern Brazil and were born craniopagus, meaning they were connected to each other with fused skulls and intertwined brains that shared vital veins. Only 1 in 60,000 births results in conjoined twins, and even fewer are joined cranially.”

“Medical experts had called the surgery to separate the brothers impossible.”

“But medical staff from Rio’s Instituto Estadual do Cérebro Paulo Niemeyer worked with London-based surgeon Noor ul Owase Jeelani of Great Ormond Street Hospital to use advanced virtual reality technology to rehearse the painstaking procedure.”

“It involved detailed imaging of the boys’ brains including CT and MRI scans, as well as checks on the rest of their bodies. Health workers, engineers and others collated data to create 3D and virtual reality models of the twins’ brains to allow the teams to study their anatomy in greater detail.”

Click here for the full article