
On Strong AI, NLP & Others

An interview with AI Expert Dmitry Korolev

· NLP, Machine Learning, Technology


Given the current pace of development in data science and artificial intelligence, it takes a well-seasoned industry veteran to offer valuable insight into the future. Doing so requires answering some higher-level, abstract, and difficult questions: how should we view the relationship between an engineer and a manager? How do we realize artificial general intelligence (AGI), and should we at all? What are some problems worth solving in data science? Recently, we had the fortunate opportunity to get on the phone with Mr. Dmitry (Dima) Korolev, currently Head of Machine Learning at FriendlyData, for an elaborate conversation on these topics.

 

Please note that all opinions expressed in this article are Mr. Korolev’s own and do not reflect the view of Rebellion Research.

 

Briefly, on Life and Career

 

Mr. Korolev graduated from the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) with a Master’s degree in Computer Science. During his student years, he was a competitive programmer and earned a software engineering position at Google as a result of his feats in Google Code Jam. Quoting from Mr. Korolev’s LinkedIn introduction: “Prior to 2013 I spent time in large engineering corporations. Since late 2017 I am with FriendlyData. Between 2013 and 2017 I was consulting, as an individual and via several consulting companies. In two long-term gigs, RealtyShares and Staance, I helped build the engineering team, acted as a lead and an architect, and made sure we developed and shipped high-performance data-heavy code reliably and at scale.”

 

On Engineering vs. Managing

 

As we can see from the description above, Mr. Korolev transitioned from an engineering role to an advisory/managerial one. When we touched on this topic, he admitted that he, too, used to agree with the consensus that in a tech company, engineers usually run the show. After all, it is the software engineer who directly creates the product and implements what management imagines the product to be. However, when he entered a management position himself, he developed a new understanding of the role: product people are in fact the ones more connected with the “real world,” and they are crucial to realizing the full potential of what engineers have created.

 

When asked whether all product managers should have some degree of engineering literacy, Mr. Korolev distinguishes two types of product managers, depending on the type of product. For instance, for the product manager of a team that develops anti-addiction software in video games for teenagers, an “engineering background for a product owner is just a red flag; they should understand psychology more or less, as opposed to how to build algorithms.” On the other hand, if one is in a more “cutting-edge” realm such as blockchain and cryptocurrencies, Mr. Korolev raises the point that it is important to understand what is technologically possible in order to make product-related decisions. He also mentions Steve Jobs, commenting that what Jobs achieved “has nothing to do with engineers”: envisioning how “this propagation, this penetration of personal computers and smartphones into people’s lives” would unfold.

 

On FriendlyData

 

FriendlyData, the company Mr. Korolev currently works at, is a renowned startup that aims to make data accessible to non-technical members of a given organization. The core product is FETCH, essentially a search engine that lets users search for data fitting a certain description (e.g., the number of active customers per year) and returns the information in visualized form. Thus, instead of having to be proficient in SQL, FriendlyData customers can use natural language to acquire the information needed for business decisions. Some features worth mentioning include the ability to “introduce new synonyms, business-specific jargon, custom calculations and domain-specific rules,” and “simple onboarding: doesn’t require training on massive datasets. FETCH only needs to know the schema of your data to get started.” (Source: https://friendlydata.io/product)
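
As a loose illustration of the kind of translation FETCH performs (the question pattern, table, and column names here are invented for the example and are not FriendlyData’s actual engine), a question can be matched against templates and rendered as SQL:

```python
import re

# Hypothetical sketch: map one recognized natural-language question shape
# onto a SQL template. Real engines support far richer phrasings, synonyms,
# and domain-specific rules; this only shows the core idea.
PATTERNS = [
    (re.compile(r"number of (\w+) (\w+) per (\w+)"),
     "SELECT {period}, COUNT(*) FROM {table} "
     "WHERE status = '{status}' GROUP BY {period}"),
]

def to_sql(question: str) -> str:
    """Translate a narrow class of questions into SQL, or raise if unsupported."""
    for pattern, template in PATTERNS:
        match = pattern.search(question.lower())
        if match:
            status, table, period = match.groups()
            return template.format(table=table, status=status, period=period)
    raise ValueError(f"unsupported question: {question!r}")

print(to_sql("Number of active customers per year"))
# SELECT year, COUNT(*) FROM customers WHERE status = 'active' GROUP BY year
```

The interesting engineering is in everything this sketch omits: ambiguity, vocabulary, and the long tail of phrasings, which is exactly the boundary discussed below.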

 

Having had a look around FriendlyData’s website and watched a short demo of the technology, we had an immediate question: all the language commands shown in the video seemed to take very logical forms that could easily be parsed into SQL queries; so, what about more qualitative or open-ended questions? In other words, where does the natural language processing (NLP) algorithm draw the boundary for non-technical people finding the data they need?

 

Mr. Korolev answers that “natural language understanding is one of those domains where the boundary between what is simple and what is AGI is so fuzzy, that people often confuse the two ends of the spectrum.” While FriendlyData “absolutely supports tons of various human language constructs,” as Mr. Korolev describes, it is part of his role to make sure that “if there’s some other way to phrase a question” that the engine has not encountered before, “we understand it and don’t break any functionality we had before.”

However, value judgments such as “high-value” would still need to be defined by the searcher – unless we somehow achieve strong AI, i.e., AGI. Mr. Korolev remarks that we don’t seem to be getting closer to that objective, but “we’re finding more and more bits and pieces ... where something that looks like an AI becomes good enough.” He hopes that FriendlyData could be well positioned to be the first company to cross this boundary once the world reaches a point where AGI is possible, though that is hard to predict for a small, independent company.

 

With this being said, ensuring that “what could be found could be found” through text alone is already very impressive. Mr. Korolev envisions that FriendlyData should “stay in the realm where it’s unambiguous what the user means by asking this question.” Once we step out of this realm into greater uncertainty, assumptions and biases become involved and are difficult to handle, especially given that even the unambiguous case cannot yet be fully solved.

 

On AGI

 

We went off topic when talking about FriendlyData’s language engine, into a brief discussion of AGI, artificial general intelligence, also known as strong AI or full AI (https://en.wikipedia.org/wiki/Artificial_general_intelligence). While Mr. Korolev agrees that modern machine learning is mainly centered around convex optimization (https://en.wikipedia.org/wiki/Convex_optimization) and “has nothing to do with real intelligence,” he raises the point that “we cannot yet gauge how imperfect human intelligence is.” In other words, very often, human intelligence is itself simply some form of convex optimization in a low-dimensional space. Mr. Korolev states that he “believes AGI should happen; believes that intelligence based on something that is conceptually of a higher order of convex optimization should take place, but is just not convinced crossing the boundary of what humans can do would require AGI to go that far.”
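
To make the “convex optimization” framing concrete, here is the textbook mechanism underlying much of modern machine learning: gradient descent on a convex function. The function and step size are chosen purely for illustration.

```python
# Gradient descent on the convex function f(x) = (x - 3)^2, whose unique
# minimum is at x = 3. Training most ML models is, at its core, a (usually
# higher-dimensional and often non-convex) version of this loop.
def gradient_descent(grad, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

# The gradient of f(x) = (x - 3)^2 is f'(x) = 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges to 3.0
```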

 

Moreover, the definition of AGI really depends on the interface involved. For instance, computer Go players and today’s chatbots long ago passed their respective versions of the Turing Test. However, if we push the boundary a bit further (for instance, an AI that answers your grandmother’s calls for you when you don’t want to, convincingly pretending to be you), we might still not call what we have AGI.

 

When it comes to the question of how dangerous AI, or AGI, could be, Mr. Korolev offers an “absolutely controversial” opinion: “there is no danger at all.” Elaborating, he decomposes the idea of human life into two parts, technological progress and consciousness, where the latter is about enjoying ourselves and pursuing happiness. The latter could be achieved through virtual reality; once this is done, if AGI is still a threat to “the way we know human exists, then let it be.” In simpler words, if human happiness could be easily achieved at will, and AGI still threatened to overhaul the conventional human lifestyle, then it is evident that humans should seek a different way of living than the one we know now. [Note from Mr. Korolev: Since our chat I’ve read Life 3.0 by Max Tegmark, highly recommend it, and subscribe to the idea of Beneficial AI as the goal humankind is best to agree on and collectively work towards. I’d say it’s better for all “species”: human, non-human, possibly extraterrestrial, and the upcoming silicon-based ones.]

 

At the end of the day, Mr. Korolev admits that he feels he is “nowhere near having a full picture here.” He believes that the next 10 years of his work, and of the industry, will “have to do with trying to use technology to expand human condition” and “open up more dimensions to what human creativity is about ... we just lack the means to do it so far.”

 

On His Keywords of Interest

[Note: due to the complexity, technicality, and variety of topics involved, only some of the keywords discussed are documented below.]

 

In his LinkedIn introduction, Mr. Korolev says, “If the words single source of truth, live schema evolution, zero-downtime master flips, or analytical differentiation excite you, we will likely find plenty of mutual interests.” Intrigued by these terms, we discussed two of them with him: live schema evolution and analytical differentiation. Both concepts underpin a single open-source project Mr. Korolev founded a few years ago, Current (http://current.ai).

 

(1) Live Schema Evolution


Current is a real-time machine learning framework in C++ for developing and deploying data-driven backends. To explain the project, Mr. Korolev starts with an example: to build a large-scale application (think Twitter), you would typically have a few hackers build your backend (think Node.js, PHP, MySQL) along with an API whose endpoints read and show tweets. Eventually, you would want to build some machine learning features (think user recommendations) that go beyond the traditional capabilities of databases, since databases simply cannot answer such questions.

Then, thinking about the ideal architecture for a database to fulfill these abilities, we realize that the database only holds items such as users and tweets – there is no event log (think “I liked a tweet from another person, then retweeted it, and then followed the person”). According to Mr. Korolev, this basic information often ends up in external data silos such as Google Analytics. However, those silos don’t let you access the data back – you cannot download it; the data is simply stored for their own purposes and cannot serve as your own API endpoints.

 

Thus, the idea is simple: treat the log events as the “first-class citizen.” The database schema that stores the tweets and users is secondary to the event log. “You can think of this as an event log where you just interleave database mutation events (such as a MySQL database) and analytical events (such as Google Analytics).” Current is essentially a tool that manifests itself to the developer both as a relational database management system and as an event log.

 

This brings us to the first keyword: live schema evolution. It is the simple idea that past events generally stay immutable, as if the storage were a proprietary blockchain, but they can be rewritten into updated form when the structure of the data warrants a change in the schema. Such a design kills two birds with one stone: on the one hand, all the events are stored for as long as the user needs them to be (the “liked a tweet and then followed the author” example above); on the other hand, those events are always kept in an up-to-date format, with no siloing of any sort, freeing the developer from the ever-growing pain of having to support multiple data schema versions during the runtime of the application.
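
One common way to implement this idea, sketched here with invented version numbers and field names rather than Current’s actual mechanism, is to keep old events untouched on disk and upgrade them through chained migration functions as they are read, so application code only ever sees the latest schema:

```python
# Per-version migration functions. v1 -> v2: the flat "name" field
# was split into first_name/last_name.
MIGRATIONS = {
    1: lambda e: {
        **{k: v for k, v in e.items() if k != "name"},
        "first_name": e["name"].split()[0],
        "last_name": e["name"].split()[-1],
        "schema_version": 2,
    },
}

LATEST = 2

def upgrade(event):
    """Replay migrations until the event reaches the latest schema version."""
    while event["schema_version"] < LATEST:
        event = MIGRATIONS[event["schema_version"]](event)
    return event

old_event = {"schema_version": 1, "name": "Ada Lovelace"}
print(upgrade(old_event))
# {'schema_version': 2, 'first_name': 'Ada', 'last_name': 'Lovelace'}
```

The stored `old_event` is never mutated; only the view handed to the application is upgraded, which is what lets past events stay immutable while the live schema evolves.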

 

(2) Analytical Differentiation


Analytical differentiation, in Mr. Korolev’s own rough terms, is the idea that “if you have convex optimization, eventually you’re going to need a gradient to make those small steps to converge towards the minimum; the more complex the cost function is, the trickier it is to derive the gradient manually.” At the same time, a closed-form solution for the gradient is simple for a computer to come up with, so there are tons of frameworks that do exactly that, and Current is one of them.

 

Essentially, the developer writes a function that takes in an array of data, and the framework is not only able to compute its value but can also, as Mr. Korolev describes, “pilot” the gradient into the function to compute all its derivatives analytically. He mentions that while working at Microsoft, he developed such a tool to introduce new, more complex forms of regularization for regression models. Later he realized that such a tool could be rebuilt in open source, and it became one of the cornerstone pieces of today’s Current. The idea is that you focus only on defining the cost function of the machine learning model, and the computer does the rest for you – including computing all the derivatives of that cost function.
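
A toy version of this idea is forward-mode automatic differentiation with dual numbers: the developer writes only the cost function, and the machinery yields exact derivatives. This sketch is not Current’s C++ implementation; it just demonstrates the technique the term refers to.

```python
class Dual:
    """A number carrying a value and a derivative part (forward-mode autodiff)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.der + o.der)
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):  # product rule propagates the derivative part
        o = self._wrap(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact df/dx at x, obtained by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).der

# A cost function with a simple regularization term: f(w) = (w - 2)^2 + 0.1*w^2.
cost = lambda w: (w - 2) * (w - 2) + 0.1 * w * w
print(derivative(cost, 1.0))  # analytic answer: 2*(1-2) + 0.2*1 = -1.8
```

The developer never writes the gradient by hand; adding a more exotic regularization term changes only the cost function, and the derivatives follow automatically, which is precisely the workflow described above.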

 

Why is this important? Analytical differentiation, being a mathematical concept, is a term mostly spoken by data scientists. Since “data scientists speak mathematics and engineers speak infrastructure,” Mr. Korolev points out that while data scientists often work out the best cost functions to use in their models, they often find that these don’t work as expected in production. An important philosophy behind Current is that by giving data scientists and engineers the same tool for tasks such as analytical differentiation, so that both work from the same defined cost function, we have greater hope of eliminating potential misunderstandings within product teams and greatly enhancing efficiency.

At the end of the day, Mr. Korolev asks the important question: “what are the procedures that help bring machine learning teams up to speed with respect to making sure they deliver value for the business?” These keywords are certainly focused on that question. While there are no silver bullets or universal “best practices,” we must continue to discover means that bring us closer to this objective.

 

On Automation vs. Manual Control

 

Machine learning today is often a means to automation, and, more often than not, people fear the power of AI out of concern that automated roles will take over human jobs.

Mr. Korolev first raises the point that many such systems can behave in unpredictable ways; as a rather extreme example, one of his university peers wrote a thesis on creating a neural network to control a nuclear power plant. “What are you going to do if it misbehaves?” asks Mr. Korolev. “Go pray?”

He then recounts a personal experience. When Mr. Korolev was an engineer at a large tech corporation, his team refused to use advanced machine learning techniques to create page rankings. They would hold lengthy meetings dedicated to figuring out why exactly two search results had flipped in order, which Mr. Korolev then considered absurd and unnecessary. However, as he grew older, he realized that even if there were a black-box algorithm that magically returned ever-better rankings on the fly, there would be other black-box algorithms that could reverse-engineer yours and control your results. By the time you notice that the supposedly all-magnificent tool no longer works, it is already too late.

 

“The truth is somewhere in the middle,” says Mr. Korolev. “In general, when the cost of a mistake is high, manual control is more effective,” he concludes. This is an intuitive answer: when the downside risk is significantly higher, or when the downside is an unacceptable outcome, we want to know what led to that result so we can turn things around as soon as possible. Moreover, Mr. Korolev adds the important comment that mathematicians and engineers must be aware of what the general population thinks – the “external human factors” – instead of merely pushing whatever seems mature in the lab into the real world.

 

This question is of great interest to Rebellion Research. As a robo investment advisor, our firm clearly stands at one far end of the automation-vs.-manual spectrum. When asked his opinion on automation in the financial services industry, Mr. Korolev was very succinct: “I strongly disagree.”

Having done his master’s thesis on predicting currency exchange rates using machine learning, Mr. Korolev once considered a career in quantitative trading and investing, but every time he talked with friends in the finance industry, “something has stopped me.” What caused him to hesitate was the sense that the “modern market is very close to chaos theory.” (https://en.wikipedia.org/wiki/Chaos_theory)

 

In other words, there are too many players, moving parts, and uncertainties in today’s market, such that “predicting something with a certain degree of confidence is, I wouldn’t say impossible; it’s just too hard.” Mr. Korolev mentions not only the widespread ignorance among traders of external factors in the market, but also the increasing complexity of modern financial instruments, many of which are tailor-made by large financial institutions for their clients and are thus not publicly transparent, yet nonetheless very impactful. “We must understand the financial market in ... some way. If we don’t regulate them to prevent third, fourth, or fifth-order instruments from emerging, it’ll be a disaster.” In an industry where real human careers, fortunes, and lives are at stake, this seems like a high-risk situation (of the kind Mr. Korolev mentioned earlier) where more manual intervention would be beneficial.

 

So, at the end of the day, is there any possibility of employing AI and machine learning to understand this very complex market? “Yes, of course,” says Mr. Korolev, “but the real question is, do we need to?” Rather than struggle to understand the current market, it seems a much better use of technology to figure out how we should change the rules of the game to win, especially with so many AI/ML players already saturating the space. In other words, it makes more sense to right a wrong than to employ increasingly complicated methods to survive in a wronged environment.

 

At this point, we were running short on time and had to conclude the interview. We would like to express our thanks to Mr. Korolev for taking the time to talk, and we believe this conversation will prove truly horizon-expanding and inspiring for our readers.

Written by Eddie Shen, Edited by Alexander Fleiss