AI Can Now Taste and Feel and It’s Freaking People Out

Google Photos’ A.I. Glow-Up: From Snapshots to Cinematic Stories

A.I. just went into overdrive. Google Photos is getting new A.I. tools that turn your snapshots into full-blown videos. DeepMind dropped a powerful new toolkit for building real-time A.I. systems.

Meta trained a giant atomic-level model to speed up chemistry. Meanwhile, researchers invented a graphene A.I. tongue that can taste flavors with nearly human-level accuracy. Oh, and Zuckerberg? He’s building A.I. supercomputers the size of power plants.

Crazy stuff’s happening right now, so let’s talk about it. So over on the consumer side of things, Google Photos is quietly lining up a pretty big quality of life overhaul. If you’ve ever poked around that little plus icon to build a collage or an animation, you know it can feel a bit like hunting for hidden levels in an old game.

The company decided to pull all of those tools into one full-screen Create panel, spotted in version 7.36 for Android, though it isn’t live for anyone yet. When it finally lands, you’ll open the panel and everything will be right there: one-tap collages, quick animations, those flashy cinematic photos that fake depth and motion, even short mashups that blend still shots with video clips.

And because the whole push is A.I. first, two heavier features sit quietly in testing: photo-to-video and remix. Those are the bits where Google’s image models do the heavy lifting, generating full video sequences or reshuffling your media with style cues. What’s strangely missing, at least in the build Android Authority dug into, is the album builder.

DeepMind’s Gen AI Processors: Turbocharging Real-Time Intelligence

For now, you’d still have to step outside the panel and tap around the old way. Google hasn’t locked in a release date, but internal flags suggest the rollout isn’t far off, likely weeks rather than months, presumably once the company is certain the A.I. filters won’t melt anyone’s storage quota. Now slide over to the developer corner, because Google DeepMind just open-sourced something called Gen AI Processors under an Apache 2.0 license, so anyone can download and build on it.

The idea is simple: keep every kind of data, whether text, sound, pictures, or bits of JSON, moving smoothly through an A.I. pipeline. Each piece is wrapped in a tiny package called a ProcessorPart. These packages travel in a single orderly line powered by Python’s asyncio.

Because every step in the line can start work the instant the first package arrives, users see the first words of an answer much sooner. Engineers call that a faster time to first token. The library plugs directly into Google’s Gemini family.

It speaks both the standard request-and-reply interface and the faster Gemini Live streaming feed, so the model can begin answering even while you’re still typing. To help developers get started, DeepMind includes three demonstration notebooks: one turns raw match data into live sports commentary, another gathers information from the web and produces quick summaries, and a third listens to a microphone and answers out loud.
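To make the streaming idea concrete without leaning on the library’s exact class names, here is a minimal sketch of the same pattern in plain Python asyncio. The Part dataclass and the processor functions below are illustrative stand-ins, not the real Gen AI Processors API.

```python
# Conceptual sketch of a streaming "processor" pipeline, in the spirit of
# Gen AI Processors, using only Python's standard asyncio. The Part class
# and the processor names here are illustrative, not the library's real API.
import asyncio
from dataclasses import dataclass

@dataclass
class Part:
    mimetype: str   # e.g. "text/plain", "audio/wav"
    data: str       # payload, kept as text for this sketch

async def source(lines):
    """Emit input as a stream of parts, one at a time."""
    for line in lines:
        yield Part("text/plain", line)
        await asyncio.sleep(0)          # yield control, as a real stream would

async def uppercase(parts):
    """A trivial processor: transform each part as soon as it arrives."""
    async for part in parts:
        yield Part(part.mimetype, part.data.upper())

async def printer(parts):
    """A sink processor: consume parts and show that output starts early."""
    async for part in parts:
        print("got:", part.data)

async def main():
    stream = source(["hello", "streaming", "world"])
    await printer(uppercase(stream))    # each stage starts before the last part exists

asyncio.run(main())
```

The point is simply that every stage is an async generator, so downstream work begins as soon as the first part arrives rather than after the whole input is ready.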

Meta’s UMA Model Rewrites Atomic Simulations with Neural Speed

In Google’s toolkit, the Google Gen AI client and Vertex A.I. handle the heavy lifting of running the models, while Gen AI Processors focuses on organizing how information flows to and from those models. Other orchestration libraries, such as LangChain or NVIDIA’s NeMo, can do similar jobs, but they were designed mainly around single-direction text chains or large neural graphs.

Gen AI Processors was built from day one for two-way, real-time streams. A community contrib folder is already filling with extras; early additions handle content filtering and PDF slicing, so its list of abilities should grow quickly.

Speaking of scale, Meta’s FAIR Lab is tackling chemistry and materials science with a model family called UMA, Universal Models for Atoms. Classic simulations rely on density functional theory (DFT), which is precise, but its runtime balloons as the number of atoms rises, making large studies painfully slow. UMA follows a more modern recipe:

train one very large neural network on a huge dataset, about 500 million atomic structures, so it can predict how atoms move almost as accurately as DFT but in a fraction of the time. The network builds on a graph design known as eSEN, enhanced with extra inputs for total electric charge, magnetic spin, and the specific settings researchers usually dial into DFT. Training proceeds in two stages.

First, the model learns to predict forces quickly. Then it is fine-tuned so its answers also conserve energy, an essential rule of physics. Even the smallest public version, UMA-S, can simulate a thousand atoms at roughly 16 steps per second on a single 80-gigabyte GPU, and it keeps working with test cases containing up to 100,000 atoms.
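The practical payoff is that a model like this can sit behind a completely standard simulation workflow. The sketch below uses ASE, the Atomic Simulation Environment, with its toy built-in EMT potential standing in for UMA purely so the script runs end to end; the assumption is that a UMA-style calculator would be attached in the same atoms.calc slot.

```python
# Sketch of the workflow UMA targets: molecular dynamics driven by an ASE
# calculator that supplies energies and forces. ASE's toy EMT potential
# stands in for the UMA model here so the script actually runs; a real run
# would attach the released UMA checkpoint's calculator in the same way.
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units

atoms = bulk("Cu", "fcc", a=3.6, cubic=True).repeat((3, 3, 3))  # 108-atom copper cell
atoms.calc = EMT()   # stand-in; a learned potential plugs in through the same interface

MaxwellBoltzmannDistribution(atoms, temperature_K=300)   # initial velocities
dyn = Langevin(atoms, timestep=1.0 * units.fs,
               temperature_K=300, friction=0.01)
dyn.run(200)         # 200 MD steps; with DFT each step would be far more expensive

print("potential energy per atom:",
      atoms.get_potential_energy() / len(atoms), "eV")
```

Swapping the calculator is the whole integration story: the MD driver neither knows nor cares whether the forces came from DFT, a toy potential, or a learned model.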

Graphene A.I. Tongue Tastes with Near-Human Precision

When the team plotted computing effort against error, they found a neat straight-line pattern: give the model more capacity and its accuracy improves in a steady, predictable way. They also used a mixture-of-experts approach, several specialized subnetworks inside one larger model. Moving from one expert to eight cut errors sharply, while raising the count past 32 delivered only modest gains.
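That mixture-of-experts idea is easy to see in miniature: a small gating network scores the experts for each input and the layer returns a weighted blend of their outputs. The PyTorch snippet below is a generic illustration of the mechanism, not UMA’s actual architecture.

```python
# Minimal mixture-of-experts layer: a gate scores the experts for each input
# and the layer returns a weighted sum of their outputs. Generic illustration
# only, not UMA's actual architecture.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim=32, num_experts=8):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)   # scores each expert per input

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)                 # (batch, experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)   # (batch, dim, experts)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)           # weighted blend

moe = TinyMoE()
print(moe(torch.randn(4, 32)).shape)   # torch.Size([4, 32])
```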

So most published versions sit near that sweet spot. On widely watched benchmarks such as AdsorbML and Matbench Discovery, UMA matches or beats models built for one specific task. It still has limits: it struggles with atomic interactions that reach farther than six angstroms, and its fixed charge and spin categories mean it cannot yet handle values it never encountered during training.

Meta’s roadmap points to flexible interaction ranges and continuous charge embeddings as the next steps toward a truly universal atomic model. Alright, now a research group has just shown off what they call a graphene tongue: an artificial taste sensor that already clocks about 98.5% accuracy on flavors it has met before and 75-90% on 40 completely new ones. The hardware is clever: the team layers graphene oxide sheets, each an atom-thick carbon lattice dotted with oxygen groups, inside a nanofluidic channel that guides a tiny stream of liquid across the stack.

As molecules in that liquid bump into the graphene, they tweak its electrical conductivity in a signature pattern, rather like the way particular keys strike particular piano strings. To teach the tongue what those patterns mean, the researchers ran 160 different reference chemicals covering the classic sweet, salty, bitter, and sour spectrum through the device and fed the resulting conductivity curves into a machine learning model. The training data also included complex mixtures such as dark roast coffee concentrates and cola syrups, so the algorithm learned to recognize blended flavor chords, not just single-note molecules.
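The team’s data and model aren’t published alongside this write-up, but the step they describe, conductivity curves in and taste labels out, has the shape of an ordinary supervised classification pipeline. The scikit-learn sketch below uses synthetic curves with planted class signatures purely to show that workflow, not their actual method.

```python
# Shape of the flavor-classification step described above: conductivity
# curves in, taste labels out. The data here is synthetic noise with planted
# class signatures, standing in for real sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, curve_len = 800, 128          # one 128-point conductivity curve per sample
tastes = ["sweet", "salty", "bitter", "sour"]

# Fake dataset: each taste class gets a characteristic bump plus noise.
X = rng.normal(0, 1, size=(n_samples, curve_len))
y = rng.integers(0, len(tastes), size=n_samples)
for i, label in enumerate(y):
    X[i, label * 30:(label * 30) + 20] += 3.0    # crude class-specific signature

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("example prediction:", tastes[clf.predict(X_test[:1])[0]])
```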

Graphene Taste Tech and Meta’s AI Superclusters Hint at Sensory and Compute Futures

Importantly, the sensing layer and the tiny computer that interprets the signal live on the same chip, so data never has to shuffle over to a slower external processor. That slashes latency and fixes a longstanding issue with older electronic tongues, which often choked on the delay between detection and classification. Of course, it’s still a lab bench prototype.

The present rig is bulky and pulls more power than a mobile device could tolerate, so the next engineering sprint is all about shrinking it and trimming the watt budget. If miniaturization succeeds, obvious use cases spring up: quick taste-loss screening for patients recovering from strokes or viral infections, food-safety checks that flag spoilage before it hits shelves, or even robot kitchen assistants that adjust seasoning on the fly.

The findings landed in the Proceedings of the National Academy of Sciences, giving the work a strong peer-review seal, yet the authors stress that a genuinely universal flavor sensor will need training libraries spanning thousands of compounds. Until those broader benchmarks exist, the graphene tongue remains a proof of concept, but a promising one that pushes machine sensing a notch closer to human perception. While all that fundamental research bubbles, Meta is shoveling resources into plain old compute.

Mark Zuckerberg posted on Monday that the company’s first A.I. data-center supercluster, codenamed Prometheus, is on track to light up in 2026 with more than a gigawatt of capacity. A follow-up cluster, Hyperion, is specced to hit 5 gigawatts over several years. For context, a gigawatt can power something like 750,000 homes.
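That homes figure is just arithmetic on an assumed average household draw; the quick check below assumes roughly 1.33 kilowatts of continuous draw per home, which is in the ballpark of typical US consumption.

```python
# Back-of-the-envelope check on "a gigawatt powers ~750,000 homes",
# assuming an average continuous household draw of about 1.33 kW
# (roughly 11,700 kWh per year) -- an assumption, not a quoted figure.
gigawatt_w = 1_000_000_000
avg_home_w = 1_333                      # assumed average continuous draw per home, in watts
print(round(gigawatt_w / avg_home_w))   # ~750,000 homes
```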

Meta Bets Billions on Superintelligence as AI Leaps Beyond Imagination

Meta wants that just for GPUs. Zuck says the firm will burn hundreds of billions of dollars in the quest for superintelligence. Capital expenditures alone sit in the $64-72 billion range for 2025.

He’s also personally inviting high-profile recruits, reportedly dangling a $200 million package at an Apple generative A.I. lead, and has already signed former GitHub chief executive officer Nat Friedman, plus former Scale AI chief executive officer Alexandr Wang. Internally, there’s grumbling that Llama 4 slipped because the delta over Llama 3 wasn’t big enough. The supercluster push is the fix.

Investors don’t seem spooked. Meta’s stock closed just under $721 on Monday, up roughly 25% year-to-date. But here’s a question to end on.

If machines can already taste, create, and simulate atoms, how long before they start doing things we haven’t even imagined yet?

Let me know what you think in the comments. If you enjoyed this one, hit like, subscribe for more. And as always, thanks for reading.
Catch you in the next one.
