Computing Interfaces | Memo ~ 2111

Machines that think alongside humans, not separately


A new paradigm where machine intelligence becomes an extension of human cognition, continuously perceiving, inferring, and proactively responding to needs within physical and digital environments.

Remember when humans learned to write, thousands of years ago, and changed the course of our civilization? Writing gave us this insane tool to document our thoughts and pass them through space and time. It let us reuse the knowledge created by others and build our own ideas on top: a turning point in civilization's overall cognitive capability. A similar zero event is brewing just around the corner.

Imagine that, rather than trying to make machines learn human skills, we place machine intelligence in front of humans right when they need it, in the form they need it. We call this the embedding of machine intelligence in humans, or human-machine integration. It is like equipping human cognition with a computer's processing and storage capabilities. "Just Google it" could become so last season.

We are confident this can't be solved by faster processors or bigger screens; we need to rethink human-computer interaction from scratch. We can look at the problem in three parts:

  • How can we make machines perceive the physical and digital worlds the way humans do? (A partly solved problem.)
  • How can machines run inference continuously and come up with thoughts, much like our anxious minds?
  • And most importantly, how can we build an interaction channel that becomes an extension of the human body? (Not BCI, of course.)

An ideal solution to this is what we call Kera. It understands our objectives, perceives the physical and digital worlds, constantly comes up with actions to reach those objectives, and interacts with humans by placing the right insight or information (not just ads) at the right time, with the right context. It is no longer humans working machines, but machines working for humans by understanding the world.
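The perceive-infer-act loop described above can be sketched in a few lines. This is a toy illustration only: the class name, methods, and matching logic are all hypothetical, not a real Kera API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical sketch of a Kera-style loop: hold an objective,
    perceive events, and infer a proactive action. Illustrative only."""
    objective: str
    observations: list = field(default_factory=list)

    def perceive(self, event: str) -> None:
        # Ingest a signal from the physical or digital environment.
        self.observations.append(event)

    def infer(self):
        # Toy inference: surface context when an observation
        # relates to the stated objective.
        for obs in self.observations:
            if self.objective.lower() in obs.lower():
                return f"surface context for: {obs}"
        return None

agent = Agent(objective="quarterly review")
agent.perceive("calendar: Quarterly Review at 10:00")
print(agent.infer())  # -> surface context for: calendar: Quarterly Review at 10:00
```

In a real system the `infer` step would be a continuously running model rather than a substring match; the point is only the shape of the loop.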

And this unit of intelligence will quite naturally extend to spaces. Just as electricity changed the potential of any space in the early 1900s, this can change how spaces work and create value. I can't stop fantasizing about a scenario where a space agent works together with a human agent, each achieving their respective objectives in a retail store or an industrial warehouse. Wooh!

Meera is our experiment in testing the presence of intelligence in a physical meeting-room setup. Imagine all the interactions you'd want in a meeting room, happening without you initiating them.

  • That quarterly report appears right when you say "let me show you the numbers."
  • Too warm? Too cold? Your room senses and adjusts temperature before anyone needs to ask.
  • "How did different regions perform?" Metrics pop up with the calculations on the go.
  • "What did we decide?" Watch personalized meeting summaries with the pre-work on action items ready for your review.

More information can be found at meera.3102labs.com, or drop a line at t@3102labs.com.

Thanks,

T

Questions we're obsessing over currently

  • How can continuous inference be achieved with current model architectures?
  • How can we achieve fluid communication so that human attention never breaks unless needed? (is peripheral attention the answer?)
  • How would agent-agent interaction work in a space-and-human setup?

------------------

Previously published internally on 21st November, 2024.
