Back To The Roots
Another thing that excited me about the IT industry was the idea of augmenting us with technology. As Archimedes once said:
Give me a place to stand, and I shall move the world.
So, yeah, what I essentially wanted was to move the world. Why?
Since the early days of my life, I've seen my grandpa building things. I've seen Russia being slowly taken apart piece by piece and sold off, and then I've seen those who rebuilt our country and made parts of it (however small) better.
It's so easy to destroy, and so hard to build.
And those who build, those who fill emptiness with meaning, are, in my imagination, equal to the gods of antiquity.
With that, and with technology, I want to see us, humans, become gods.
Gods capable of building entire worlds, capable of building anything we want.
Experiments: Building Better Worlds within Games
All of these games are sophisticated simulators of our world, or at least of some of its aspects, at different scales.
In Fallout 4 you build settlements in the Greater Boston area; in Frostpunk you manage a small city and the large territory around it. In the C&C games you build a base on a large map. In SimCity you deal with an entire city. In Transport Tycoon you deal with a larger area, where you not only build transportation systems but also help grow cities. In Surviving Mars and in Aven Colony you manage colonies on a planet's surface. In Anno 2270 (one of the best economic simulators I've ever seen) you work at the scale of an entire planet and its satellite, the Moon. Finally, in Civilization, you deal with the planet, but the timescale of your operations is thousands of years (if you are lucky).
In all of these games, you get a chance to become a god.
With wisdom (or cheats) you can grow your own worlds, with the given tools. You can enjoy the prosperity of your cities, share photos of your miniature settlements, cities, transportation systems, and other things.
But what you can't do is make changes to the real world.
All of these games provide you with an immersion; however sophisticated, it is still just an immersion.
When you come back out, the world around you hasn't changed.
There are still problems around the world.
So if you didn't fix your house's roof, you'd still lose the warmth of your home. And if you didn't buy food, your family would starve.
So, to become a god, a real god, playing games won't help. The worlds you create there will be virtual.
You need to build real technology-based tools that would augment you into a god capable of building worlds.
With these thoughts, I turned my attention to the IT industry.
From Microsoft and Google to Zet Universe
Your Potential. Our Passion.
Wow, so a big company wants to give everyone tools to realize their potentials? I'm interested.
It believes these tools should stay with their users, be local, be private? I'm hooked.
To me, this idea of using a PC as a technology lever to augment us, humans, looked like a perfect reason to join Microsoft.
After joining, I asked a few folks from Windows, Office, and MCS to help me advocate the importance of building such a lever on top of the Microsoft client and cloud platforms.
We called this idea the "Windows Semantic Platform", and we wrote a ThinkWeek paper for BillG and other senior execs to read. The paper outlined a vision for a system that would span both client and cloud, but I believed that the client had to be rich, and that the cloud would be a pipeline connecting those clients together when needed. I believed that both sides would mirror each other.
Long story short, after spending almost 4 years at Microsoft, organizing a SIG on Context-Aware Computing, and running a seminar where Gordon Bell (author of Total Recall, a book about your digital memory) gave the closing talk, I left the company and spent about a year at Google.
Google seemed to be a place where people really wanted to build this digital memory of the world, and of every single person. But what I found was that, despite this dream, Google wanted its system to work only online, with no place left for client computing.
Given my understanding of the importance of keeping personal data personal, I couldn't stay.
It was also at Google that I understood I had to build this lever with my own hands, and thus the desire to start Zet Universe was born.
Zet Universe
And I wanted to create tools to aid my users and me in building big things.
So, a project management aspect was essential.
The second aspect was learning. You can go only so far on passion alone. When provided with the necessary knowledge, you can do practically anything (as Jules Verne's "The Mysterious Island" taught us).
Zet Universe, as I look at it today, was an opportunity to build a tool that would augment a human in learning and managing projects.
In my vision of the future, every human would have a wearable computer, always there, that would:
* record everything its user sees and hears, and provide a digital memory to its user,
* be a place to organize its user's thoughts and ideas in the structured form of a knowledge graph,
* be a passive tool that would aid its user in her everyday activities,
* be a system that would do activities on the user's behalf,
* be proactive and help the user when the need arises,
* protect the user from unnecessary interruptions,
* be a system that would encourage the user to grow in her area of choice.
All in all, I wanted to have a technical lever that would augment each human in the world, and give him or her a chance to achieve more.
And, most importantly, I strongly believed that such a tool should be private, should always be with its user, and should not be hosted in the cloud.
I've shipped two versions of Zet Universe, going from a rough prototype (~4 GB, hard to download, hard to install, hard to get up and running) to a small (20 MB) installation that runs easily on x86 machines.
Zet Universe became an amazing tool for organizing its users' thoughts and ideas about different problem spaces.
In Zet Universe, you can now create endless project spaces, add entities from Wikipedia (using the Dandelion API), add your files and folders, add web pages, and create your own entities using either the built-in kinds or kinds you define yourself.
You can define connections between these entities (either manually, or by asking the Dandelion API to extract them automatically from the documents and web pages added to the project space), and capture the whole knowledge graph you've created with the graph entities within the system.
In essence, Zet Universe became a tool for building your own knowledge graph, with the ability to connect it to the larger knowledge graph based on Wikidata/Wikipedia.
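The core idea, entities plus typed connections between them, is simple enough to sketch. This is illustrative Python only (the names `Entity`, `KnowledgeGraph`, and the example relations are mine, not Zet Universe's actual data model, which is certainly far richer):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    """A node in the personal knowledge graph."""
    name: str
    kind: str  # a built-in kind ("Person", "Document", ...) or a custom one

@dataclass
class KnowledgeGraph:
    """Entities plus typed connections (source, relation, target)."""
    entities: set = field(default_factory=set)
    connections: list = field(default_factory=list)

    def connect(self, source, relation, target):
        # Adding a connection implicitly registers both endpoints.
        self.entities.update((source, target))
        self.connections.append((source, relation, target))

    def neighbors(self, entity):
        """All (relation, target) pairs going out of the given entity."""
        return [(rel, tgt) for (src, rel, tgt) in self.connections
                if src == entity]

# Usage: a tiny project space with one manually defined connection
kg = KnowledgeGraph()
zet = Entity("Zet Universe", "Software")
wiki = Entity("Wikipedia", "Website")
kg.connect(zet, "links entities to", wiki)
print(kg.neighbors(zet))
```

Linking such a graph to Wikidata/Wikipedia then amounts to attaching an external identifier to an `Entity`, which is the kind of enrichment the Dandelion API helps with.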
To this very day, Zet Universe is a local application that runs entirely on your PC, and with the necessary plugins it can extract information locally (although connecting entities to Wikipedia/Wikidata would require some sophisticated work), without any need to be connected to the Internet.
After all, your personal knowledge graph should remain private.
After spending 5.5 years on Zet Universe, however, I thought it was a good time to take a pause and do some work in the industry. And so I did.
Yandex
It was a fantastic time at Yandex.
I was busy building a continuous semantic indexing pipeline for Yandex.Mail, and working on the ToDos and Reminders scenario for Yandex's voice assistant, Alice.
While the pipeline became the backbone for scenarios such as reminding you about your flights and changes to them, the ToDos and Reminders scenario became one of the foundational ones for Alice. This project involved ~20 teams (Alice is hosted in several of Yandex's mobile and desktop apps, and ToDos and Reminders involves one frontend and two backends, not to mention everything else), and required a lot of cooperation and alignment across those teams to ship the scenario to the public.
During this work on ToDos and Reminders, I realized how standardized voice assistants are these days. They share a lot with Zet Universe: they have a pipeline for processing user requests, they extract information (slots) from those requests, and they perform the requested operations (when possible).
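That pipeline shape (detect the intent, extract its slots, act on them) can be sketched with a single regex rule. This is a toy illustration of the pattern, not how Alice or any production assistant actually works; real systems use trained NLU models rather than hand-written rules:

```python
import re

# One hand-written rule for a single "create_reminder" intent.
REMINDER_RULE = re.compile(
    r"remind me to (?P<task>.+) at (?P<time>\d{1,2}(:\d{2})?\s?(am|pm)?)",
    re.IGNORECASE,
)

def parse(utterance):
    """Stages 1-2: detect the intent and extract its slots."""
    m = REMINDER_RULE.search(utterance)
    if m:
        return {"intent": "create_reminder",
                "slots": {"task": m.group("task"), "time": m.group("time")}}
    return {"intent": "unknown", "slots": {}}

def act(request):
    """Stage 3: perform the requested operation (here, just confirm it)."""
    if request["intent"] == "create_reminder":
        s = request["slots"]
        return f"Reminder set: '{s['task']}' at {s['time']}"
    return "Sorry, I didn't understand."

print(act(parse("Remind me to water the plants at 6 pm")))
```

Swapping the regex for a statistical intent classifier and slot tagger gives you the architecture most assistants share today; the surrounding pipeline barely changes.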
In many ways, digital assistants are a perfect component of the earlier vision of that "human augmentation system" we described in the Windows Semantic Platform ThinkWeek paper, and of what was shown in the Productivity Future Vision video that followed it (which I've referenced a few times on this blog before).
What's Next?
You can do a lot of stuff with tools like IFTTT and digital assistants.
My requirement for such an assistant is that it can work offline, on a chip, without any need to access the Internet.
Surprisingly, such an assistant exists.
It's called Snips.
Read more about their technology here. E.g., Snips' take on NLU, their deep dive into ASR on embedded systems (also look at their paper), and so on. This is all fascinating stuff, and I'm super excited about the work this team is doing.
One little thing I find especially cool is their approach to data generation. At Yandex, we used internal tools based on language rules to generate training texts automatically, but the configuration was done by an engineer.
Snips offers the same functionality as a paid service where you pick the number of samples and the way they are produced, and then, suddenly, you've dramatically increased the quality of your intents!
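The idea behind rule-based data generation is simple enough to sketch: expand slot templates into labeled utterances, then sample as many as you need. The templates and slot values below are my own invented examples; neither Yandex's internal tool nor Snips' service works exactly like this:

```python
import itertools
import random

# Hypothetical templates: each {slot} placeholder is filled from a value list.
TEMPLATES = ["remind me to {task} {when}",
             "don't let me forget to {task} {when}"]
TASKS = ["buy milk", "call mom", "water the plants"]
WHENS = ["tomorrow", "at 6 pm", "tonight"]

def generate(n_samples, seed=0):
    """Expand every template/slot combination, then sample n utterances."""
    all_texts = [t.format(task=task, when=when)
                 for t, task, when in itertools.product(TEMPLATES, TASKS, WHENS)]
    rng = random.Random(seed)  # fixed seed for reproducible training sets
    return rng.sample(all_texts, min(n_samples, len(all_texts)))

for text in generate(3):
    print(text)
```

Even this toy version shows why the approach scales: two templates and two slots already yield 18 distinct utterances, and adding one more slot value multiplies the output again.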
As for me, I spent a work week getting their tech to run.
By the end of it, I had it working.
Snips platform works perfectly on my DIY voice assistant hardware kit: