Semantic typing
Typing on a teraflop computer should not mirror typing on a typewriter. Semantic typing is the act of conveying information without meticulously sequencing symbols, instead allowing the task at hand, the context, and your high-level decisions to speak.
Background
Why we use codes
If two people are nearby, shouting and pointing works to convey a basic message. However, our bodies poorly support transmission over larger distances (both physical and temporal), which led to the invention of smoke signals and writing. These require us to learn the language of the medium (and agree on its grounding), which makes them fundamentally different from shouting and pointing: the latter is easy to learn but has low bandwidth, while the opposite is true for coded media.
Modern writing relies almost exclusively on stringing together abstract characters. This method worked great when paper was expensive (bringing about the need for compactness), ink was messy (resulting in robust but complex characters), communication frequency was low (justifying long writing times), and latency was high (justifying the focus on unambiguous, context-free writing). Today's writing modalities don't suffer from the same constraints, and communication frequency and latency bear no resemblance to their historical values.
Why we can do better
Many of these changes are due to the digital era, in which most social interactions, teaching, programming, and everyday tasks still happen by stringing together abstract characters. However, the original reasons for adopting these symbols are now defunct. Their transfer is done in binary, and their presentation is disentangled from writing. Moreover, for the first time in history, we have automatic translators to and from written language. Speech-to-text and text-to-speech methods do lose some qualities; in particular, the former is hard to amend, and the latter doesn't facilitate skimming. For the same two reasons, phone and video calls that simply eliminate the middle layer are not perfect either. So how can we keep the valuable properties while dropping the archaic constraints?
Depending on the writing system, one or more of these symbols together sometimes do carry a specific meaning, which we can use as a basis. For example, we can build up words by specifying meaningful features. When taking sound as the only feature, this is similar to phonetic notation. Theoretically, when taking the formality and directness scalars, you could plot all words on a 2D canvas and type by selecting them. Instead of these bottom-up approaches of characterizing words, you can also filter them top-down. For example, one can select ontologically
things > animals > small mammals > rodents > white mice
or overlap patches
individual action ∧ design and materialization ∧ lasting through time ∧ for others to understand
to mean writing.
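As a toy illustration, both filters reduce to narrowing a structure until one word remains. The taxonomy and feature sets below are hypothetical stand-ins, not a real vocabulary:

```python
# Top-down ontological filtering: follow category choices down a taxonomy.
TAXONOMY = {
    "things": {
        "animals": {
            "small mammals": {
                "rodents": {"white mice": {}, "hamsters": {}},
            },
        },
    },
}

def narrow(tree, choices):
    """Follow a sequence of category choices; the remaining keys are candidates."""
    node = tree
    for choice in choices:
        node = node[choice]
    return sorted(node)

print(narrow(TAXONOMY, ["things", "animals", "small mammals", "rodents"]))
# -> ['hamsters', 'white mice']

# Patch overlap: a word is the intersection of its semantic features.
PATCHES = {
    "individual action": {"writing", "running", "drawing"},
    "design and materialization": {"writing", "building", "drawing"},
    "lasting through time": {"writing", "building", "running"},
    "for others to understand": {"writing", "teaching"},
}
print(set.intersection(*PATCHES.values()))  # -> {'writing'}
```

The next section will look into more concrete semantics of different tasks for which we classically string together characters.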
A more radical approach would be to design a new language that we write, read, search, and share while fully utilizing our devices' extraordinary storage and computing power. If you're interested in discussing this, please get in touch with me.
Task-dependent semantics
While semantic typing works in the general case, it would be wasteful not to take into account that each task has different language requisites and characteristics.
One-on-one verbal communication
This use of language is probably the most common, and we'll focus specifically on chatting, discussing its distinctive semantics.
A conversation has a history: you're expected to either stay on topic or explicitly bring up a new one - in short, to talk coherently. That unremarkable truth - that we don't put out random grammatical sentences in a conversation - dramatically reduces your reply options.
In a one-on-one conversation, you have two people, each with an identity. A person's profile predetermines many of the answers to the other party's queries (or at least heavily tailors the distribution over the answers). For example, if you have a dog named Zoro, you're likely to answer "Zoro" to the question "What is the name of your dog?", if you decide to answer properly.
Ruler is a language-model-powered accessibility keyboard I developed that makes the above three aspects obvious. The expected number of keystrokes to type a word goes down from 35 to 22 when adding a frequency-sorted, character-filtered word list; to 4.5 when feeding in the conversation up to that point; and to 3 when also priming the model with the writer's profile.
The first improvement makes an extra assumption, namely that words are fairly atomic. It's not a full-blown semantic keyboard but something more like a context-aware stringing together of likely words - an improvement nonetheless.
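To see why each layer of context helps, think of typing as singling a word out of a probability distribution: the sharper the context makes that distribution, the fewer choices are needed. A minimal sketch with made-up numbers (not Ruler's actual model), using entropy as a stand-in for the expected number of choices:

```python
from math import log2

def bits_to_single_out(distribution):
    """Entropy: roughly the number of binary choices needed to pick a word."""
    return sum(-p * log2(p) for p in distribution if p > 0)

# Hypothetical distributions over four candidate words for one reply slot.
distributions = {
    "no context (uniform)":       [0.25, 0.25, 0.25, 0.25],
    "frequency-sorted word list": [0.55, 0.25, 0.15, 0.05],
    "+ conversation so far":      [0.85, 0.10, 0.04, 0.01],
    "+ writer's profile":         [0.96, 0.02, 0.01, 0.01],
}

# Each extra source of context concentrates the distribution,
# so fewer bits (keystrokes) are needed to single out the word.
for name, dist in distributions.items():
    print(f"{name:<28} {bits_to_single_out(dist):.2f} bits")
```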
Displaying concepts
Writing a wiki has qualitatively different properties from the previous task. For one, it doesn't have the time dimension or the causality relation a conversation has. Aimed at the future and at any possible reader, a wiki page doesn't rely on context. Instead, it makes it easy for the future reader to educate themselves on notions used in the article by providing links. The emergent structure makes it so that - in principle - you can't enter a wiki, because every concept is only defined in terms of other concepts. In practice, many wikis include examples that are written as if directly to a peer.
The reader-agnostic nature creates an opportunity for semantic typing here. Many large knowledge stores, like those behind Siri or Wikipedia cards, keep information in a machine-readable format. The user's client application can then show this information in whatever format, language, and level of detail suits the reader best.
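A minimal sketch of that reader-side rendering; the fact schema and renderer are hypothetical stand-ins for stores like Wikidata:

```python
# One machine-readable fact, rendered per reader.
fact = {
    "subject": "white mouse",
    "class": "rodent",
    "properties": {"average mass": ("20", "g"), "lifespan": ("2", "years")},
}

def render(fact, detail="short"):
    """Each client picks the level of detail (language would work the same way)."""
    if detail == "short":
        return f"{fact['subject']}: a kind of {fact['class']}."
    lines = [f"{fact['subject']} ({fact['class']})"]
    for name, (value, unit) in fact["properties"].items():
        lines.append(f"  {name}: {value} {unit}")
    return "\n".join(lines)

print(render(fact))                 # terse, Siri-style card
print(render(fact, detail="full"))  # fuller article view
```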
Software programming
Even though it is much more specific than the other tasks, programming is an area I'm knowledgeable about, and one that could benefit a lot from semantic typing. Common programming languages are formal languages, which means they have an exact grammar and a well-defined set of tokens. It'd be relatively straightforward for a software keyboard to type with an "inverse parser", taking term choices as input and producing characters as output. This style - similar to a projectional editor - would eliminate syntax errors. Of course, it is common practice to have error highlighting while typing, but we can do better.
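A toy sketch of such an inverse parser, over a hypothetical two-rule expression grammar: the writer only ever picks a production, and the characters fall out of the choices.

```python
# A tiny grammar; real languages would have many more productions.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["number"], ["(", "expr", ")"]],
}
TERMINALS = {"+", "(", ")", "number"}

def build(symbol, picks):
    """Expand `symbol` by consuming the writer's production choices."""
    if symbol in TERMINALS:
        return symbol                       # characters are emitted, not typed
    production = GRAMMAR[symbol][picks.pop(0)]
    return " ".join(build(s, picks) for s in production)

# Choices: expr -> term + expr, term -> number, expr -> term, term -> number
print(build("expr", [0, 0, 1, 0]))          # -> "number + number"
```

Every output is grammatical by construction, so a syntax error simply cannot be typed in the first place.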
We're dealing with a system that has solid semantics from the ground up, the lack of which makes natural language so hard for computers. We should be able to type by not only adhering to these semantics but building with them. Knowing the type information, we can offer the palette of functions matching the current type requirements. Moreover, we don't need to serve them in a plain list: we can show them in their respective namespace, suggesting arguments from the correct scope, all without losing flexibility. You're coding together with the compiler or interpreter instead of against it.
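A minimal sketch of that type-aware palette, with hypothetical signatures standing in for a real scope:

```python
# Each name maps to (argument types, return type); made-up signatures.
SCOPE = {
    "length": (["list"], "int"),
    "sum":    (["list"], "int"),
    "sort":   (["list"], "list"),
    "upper":  (["str"], "str"),
}

def palette(required_type, scope):
    """Only offer functions whose return type fills the current hole."""
    return [name for name, (_args, ret) in scope.items() if ret == required_type]

# The editor knows the hole needs an int, so string helpers never appear:
print(palette("int", SCOPE))   # -> ['length', 'sum']
```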
Enso is an example in this direction: it has a dual graphical and text representation and cleverly implements this on top of a regular keyboard. Another, more opinionated approach that embodies semantic typing well is Lamdu. Lamdu does away with the idea that the characters you hit must appear verbatim on-screen: code compactifies when you're not working on it and takes on whatever language or appearance you want.
I've worked on a few applications in this space too: with MetaEditor you drag and drop expressions that export to your favorite language or framework, Mirror has a version where you type with complete statements, and in TraversalSpecify you build queries together with the computer.