Letterform through lexicon

Abstract

Introduction

Sorting letters

Breaking taxonomies

Expressing ideas

Thinking slow

Bibliography

Thank you,

Extras

Abstract

Formless words do not exist in writing. Words must be perceived through a visual intermediary: the letter-form. But where exactly do these letter-forms come from, and why do they look the way they do? What are they? It’s a fundamentally metaphysical question, and one that can profoundly inform a practice.

While many theories exist of how different parts of roman type came to be, the prevailing view of what type is seems to be informed by a completely different resource: taxonomies, or systems of classification. This text explores the use of systems of classification, design tools, and other methods of seeing and describing type through the lens of the vocabularies they offer. It aims to advance the idea that a school of design, a distinctive style, can only be achieved by changing the categories in which one thinks.

Introduction

This thesis is not about taxonomy, or what some would call type classification. Instead, what I’m interested in discussing is how the words generated by taxonomies — and other typographic terminology — impact the perception, selection, and making of type.

When I speak of ‘typeface design’, I speak specifically of formal latin typeface design — a lot of display type is purposefully unclassifiable (as in, hard to describe in consistent terms), and a lot of the nuance I intend to discuss is most important when broad gestures are restricted by reading conventions. When I speak of a ‘taxonomy’, I’m referring to many different things that might not self-describe as such. In the text, I compare different ideas as though they are taxonomies even if they clearly aren’t; my interest is not in the rigorous application of categories, but rather in the linguistics (and ideas) themselves in relation to the visual craft. And, in a similar spirit, the field of typeface design does not follow any strict rigour in its categories.

This looseness might be the reason why typographic terminology is so ill-defined. It feels purposefully adversarial. When we talk about type, we gesture at a semiotic mess — a web of signifiers and references that sometimes work — with diverging categories that needlessly conflate distinct ideas. [Tracy, W. (2003). Letters of credit: A view of type design, p. 13.]

What emerges as a common way of understanding type is a sort of folk taxonomy: an amalgamation of categories representing different systems, modified and rethought over decades, growing more imprecise each time a new meaning is needed. [For example, the ‘slab serif’ archetype, being relatively recent, is deeply under-explored, not to speak of monospaced fonts.]

Creating a solution is, as I hope to make clear, impossible. However, looking at the variety of taxonomies can be a good exercise in seeing the same objects in different ways. My intent, then, is the exact opposite of solving the issues of incomplete taxonomies; I’d like to complicate the lexicon of typography further and see what happens.

My hypothesis is that the only way to innovate in the field of typeface design is to innovate on the methods of describing type, through critical consideration of the categories in which one thinks. It’s an investigation of vocabulary as a pedagogical and design tool in the case of type and, more broadly, of the metaphysics of typography. The core argument, in short, is that typeface design is a process which fundamentally happens within a context of constructed traditions, so positioning a typeface in some context happens by necessity. The way we describe typefaces implies different contexts, which affect which features are perceived in relation to one another — and we should use this act of describing as a tool for directing attention to features that would otherwise be neglected, creating something within conventions in core ways but outside of them in details: the only place we can meaningfully alter convention while keeping the type readable.

The chapters that follow do not introduce new arguments; they serve to explain and contextualise what I mean, elaborate on the argument, and introduce examples. If my argumentative musings are successful, the reader will come away appreciating terminology as a valuable tool for understanding typography on different levels — a skill that should be developed and extended much like any other (e.g. the ability to draw, see detail, or understand technology) — and as a form of conditioning that, if unexamined, creates bias one might be unaware of, be it desirable or not.

Sorting letters

I will first explain a number of ‘taxonomies’, then look into the blind spots they create. For the purposes of this text, I will focus on classifications derived from Vox-AtypI and Gerrit Noordzij’s theory of The Stroke, but I will also touch on other systems of classifying, understanding, and designing type: a system proposed by Indra Kupferschmid, LetterModeller (LeMo), and CEDARS+.

The Vox-AtypI classification is sometimes referred to as ‘the type classification’ and is many things. The original classification splits formal latin types into three umbrella terms and their subcategories. The first is the Classicals, containing Venetian, Garalde, and Transitional types. Following it is the Moderns, containing Didones, Mechanistic (or slab-serif), and Linéales (or sans-serif) types, and the last main category is the Calligraphics, containing the Glyphic, Script, Graphic, Blackletter, and Gaelic subcategories. [Devroye, L. Vox type classification. Devroye.org. https://luc.devroye.org/fonts-32669.html]

In the modern day the category of Linéales is rarely alluded to by that name; instead, the words for sans-serifs in use come from the 1967 British Standard (based on Vox’s classification), which divides them further into categories with more familiar names: Grotesque, Neo-grotesque, Geometric, and Humanist. [Devroye, L. British Standards for Type Classification. Retrieved 30 January 2025, from https://luc.devroye.org/britishstandards.html]

When I speak of a ‘folk taxonomy’ of typefaces, this is what I’m referring to: most embedded in the contemporary lexicon is a combination of Vox-AtypI and the 1967 British Standard, dividing typefaces into ‘Serif’ and ‘Sans-serif’ first. Inside the Serif category, Transitional and Didone types sit alongside ‘Oldstyle’ (or ‘Humanist’) serifs, a combination of Venetians and Garaldes. The Sans-serif category is divided primarily into the British Standard set of Grotesque, Neo-grotesque, Geometric, and Humanist. The exact borders of these categories are always changing, being iterated on by many foundries for their website search, in speech, and in writing.

The model of The Stroke [Noordzij, G. (2019). The stroke: Theory of writing.] differs by being parametric. On the most basic level, it distinguishes between two axes, each with two named categories: contrast is divided into Translation and Expansion, and construction is divided into Running and Interrupted. While having only four categories seems limiting, the critical component is that the two contrast types present extremes that can be interpolated between, and the amount of contrast is variable. Almost every latin text typeface can be assigned a spot on the Noordzij Cube, shown below.

Figure 1: On the left, a basic representation of Gerrit Noordzij’s early theory, The Stroke of the Pen. On the right, a Noordzij Cube — the parametric model. Both are drawings by Gerrit Noordzij from the 2018 edition of The Stroke.
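Because the cube is parametric, the idea can be sketched in code. Below is a minimal Python sketch; the axis names and numeric ranges are my own reduction, not Noordzij’s. A typeface’s position is three numbers, and any blend of two positions is itself a valid position.

```python
from dataclasses import dataclass

# A position in a simplified Noordzij Cube. The axis names and 0..1 ranges
# are my own reduction of the model, chosen for illustration.
@dataclass
class StrokePosition:
    contrast_type: float    # 0.0 = pure translation, 1.0 = pure expansion
    contrast_amount: float  # 0.0 = monolinear, 1.0 = maximal thick/thin difference
    weight: float           # overall heaviness of the stroke

def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between two scalar values."""
    return a + (b - a) * t

def interpolate(p: StrokePosition, q: StrokePosition, t: float) -> StrokePosition:
    """Any point between two positions in the cube is itself a valid position."""
    return StrokePosition(
        contrast_type=lerp(p.contrast_type, q.contrast_type, t),
        contrast_amount=lerp(p.contrast_amount, q.contrast_amount, t),
        weight=lerp(p.weight, q.weight, t),
    )

# A garalde-like position, a didone-like one, and something halfway between.
garalde = StrokePosition(contrast_type=0.0, contrast_amount=0.4, weight=0.5)
didone = StrokePosition(contrast_type=1.0, contrast_amount=0.8, weight=0.5)
print(interpolate(garalde, didone, 0.5))
```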

In Type classifications are useful, but the common ones are not, Indra Kupferschmid proposes a system based in some capacity on the ideas of The Stroke, but extended into three stages: the ‘bone’ — the underlying structure or model; the ‘flesh’ — features like serifs and the application of contrast; and the ‘skin’ — comprising details like the specifics of the serifs. Thanks to its modularity, it translates well into a system of tags that allows for variable specificity in search, sorting, and describing type.

Figure 2: Kupferschmid’s 3-layered system still cannot be used to classify Rotis, but is probably one of the better approaches devised thus far, being usable and not over-complicated. Sourced from kupferschrift.de
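As a sketch of how such a layered tag system could support variable specificity in search, consider the following toy Python example. The tag values and layer contents are illustrative inventions, not taken from Kupferschmid’s article.

```python
# Hypothetical fonts tagged on three layers: 'bone' (structure/model),
# 'flesh' (features), 'skin' (details). All values are invented.
fonts = {
    "Example Serif": {"bone": "humanist",
                      "flesh": ["serif", "translation-contrast"],
                      "skin": ["bracketed-serifs"]},
    "Example Sans":  {"bone": "humanist",
                      "flesh": ["sans-serif", "low-contrast"],
                      "skin": ["flared-terminals"]},
}

def search(fonts, bone=None, flesh=None, skin=None):
    """Match on any subset of layers: a coarse query names only the 'bone',
    a precise one names tags on all three layers."""
    results = []
    for name, tags in fonts.items():
        if bone and tags["bone"] != bone:
            continue
        if flesh and not set(flesh) <= set(tags["flesh"]):
            continue
        if skin and not set(skin) <= set(tags["skin"]):
            continue
        results.append(name)
    return results

print(search(fonts, bone="humanist"))                   # both fonts
print(search(fonts, bone="humanist", flesh=["serif"]))  # only the serif
```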

Dr. Frank E. Blokland’s LetterModeller (LeMo) was made as a tool for the analysis of Renaissance roman type, but is also used as a tool of type design in the KABK LetterStudio. It focuses on the pattern created by humanist roman type — the fence-like system of arches and stems. Its value is in providing a framework, reasoning, and words for abstracting type into a model. The process of designing type with LeMo is essentially the reversal of that: starting with the model, and de-abstracting [I’m opting to introduce the term de-abstraction, which represents more or less the same idea as formalisation, because it is a more intuitive description of what is happening in my use of the word. While formalising puts emphasis on the final result of creating a typeface — a formal set of letters meant for reading — by referring to the process as de-abstraction, I try to put emphasis on the process of taking an abstract model and ‘fleshing it out’, designing a typeface inside-out.] it into a formal roman typeface.

Figure 3: Layers of translation between model and 'idiom' in LetterModeller. Sourced from lettermodel.org

Dr. Nadine Chahine’s CEDARS+ organises typefaces by precise categories (e.g. contrast amount) alongside slightly more ‘vibes-based’ ones (‘energy’) which can be intuited in some contexts but are difficult to measure consistently across styles. To accommodate various other features that might be searched for, the ‘+’ in CEDARS+ stands for miscellaneous features that might vary per script. This tool is deliberately and clearly made for search. [I think it succeeds as a tool of search but fails as a tool of describing type, because the contrast it chooses to present by default — vertical — is what Noordzij would call Expansion. Pointed-nib writing is a rather recent phenomenon (ca. 17th century) and many scripts evolved with broad-pen-like contrast, e.g. Hebrew, Arabic, many Brahmic scripts, etc. (maybe because broad-edged writing tools are easier to make), so delegating the ‘axis’ to a separate category that has to be fulfilled alienates those scripts. I think the beauty of Noordzij’s approach is that it separates by contrast type first, which is a more structurally integral part of the letter than exactly how much contrast there is. But type is often searched by the amount of contrast.]

Figure 4: Visual representations of Contrast, Energy, Details, and Structure in CEDARS+. Sourced from I Love Typography

These broadly represent the two approaches commonly taken to describe type: systems like Vox-AtypI and its derivatives represent the retrospective approach, where existing material is sorted based on printing/design tradition and appearance. The Stroke, LeMo, CEDARS+, etc., represent the analytical approach, which abstracts type into a set of parameters — based on its outward appearance, as in CEDARS+, or on its underlying structure, as in The Stroke or LeMo.

This collection of ideas as things to be compared is — yes — ridiculous. There are two taxonomies, one theory of writing, one theory of structure, and one search tool, all of which represent fundamentally distinct approaches. It’s also precisely the absurdity of the selection that makes the comparison meaningful. Vocabulary and constructed categories become their equalisers; what interests me in all of these is what they choose to name and how they choose to name it. The visual differences in typefaces come down to the contours — we can see that. But what we notice, and the order in which we notice it, is another matter entirely. It is up to the system of abstraction to create category lines and lump typefaces together, to decide which qualities belong together and which do not. The way in which we relate features to one another and understand them is a matter of abstractions — into names like ‘serif’ or ‘terminal’, into calligraphy strokes, into traditions. Every way of looking can interact with every other one, just as it is possible to make a painting informed by conflicting theories of colour, applied selectively.

Breaking taxonomies

[…] Could a law not be found that would account for the successive or simultaneous emergence of disparate concepts? Could a system of occurrence not be found between them that was not a logical systematicity? Rather than wishing to replace concepts in a virtual deductive edifice, one would have to describe the organization of the field of statements where they appeared and circulated.
(Foucault, Michel. The Archaeology of Knowledge, p. 56)

The entire project of classifying type was thought to be unreasonable until the middle of the 20th century, when, due to the high volume of new types, it had to be done. [Kupferschmid, I. Type classifications are useful, but the common ones are not.]

Immediately, the idea of universality across scripts appears impossible; with an endless range of writing traditions — alphabets, abugidas, syllabics, abjads, logographic scripts, written with a variety of tools and in a variety of contexts [e.g. the history of Brahmic scripts with palm-leaf manuscripts, the history of CJK(+) scripts with the pointed brush, the history of latin with the flat brush, square pen, and pointed nib, of Baybayin with knife-carving, etc.] — the task of classifying all type is like the task of classifying any set of white and black shapes. [And even the reduction of type to black and white shapes is a major assumption.]

Even within just the latin script there are fundamental, unsolvable problems. Typefaces that follow the same morphology can be interpolated between, [In the case of bézier curves, identical point structure is needed to interpolate. But that’s a technical restriction of a medium — if two shapes have the same topology, they can be interpolated between, i.e. in-between states can be created. This is possible for typeface design too, e.g. using Ikarus splines and clever algorithms rather than béziers.] so there can’t be a starting point, and arbitrary distinctions have to be made, which will cause the system to fail more over time.
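The footnote’s point can be made concrete in a few lines. In this hypothetical Python sketch, an in-between of two outlines is produced pointwise, which is exactly why a shared point structure is required, and why, given that structure, there is a continuum rather than a clean category line.

```python
# In-betweens are made pointwise, so the two outlines must pair up
# point for point.
Point = tuple[float, float]

def interpolate_outline(a: list[Point], b: list[Point], t: float) -> list[Point]:
    """Produce the outline t of the way from a to b (0 <= t <= 1)."""
    if len(a) != len(b):
        raise ValueError("outlines must share the same point structure")
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(a, b)]

# Two hypothetical stems drawn with the same structure; t = 0.5 yields a
# weight that belongs to neither endpoint's category.
light = [(0, 0), (40, 0), (40, 700), (0, 700)]
bold = [(0, 0), (120, 0), (120, 700), (0, 700)]
print(interpolate_outline(light, bold, 0.5))
```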

The issue of interpolation is one reason why retrospective systems fail. The failings of analytical systems have to do with the subjectivity of the parameters they choose to measure; the act of measuring is not neutral, and the assumption that something is worth measuring is in itself a subjective social construction. [Ásta Kristjana Sveinsdóttir. (2018). Categories we live by: The construction of sex, gender, race, and other social categories.]

No number of measurements, [Measurements of features, e.g. stem width, serif length, contrast amount, etc.] then, creates an objective categorisation or designation. Even if we agree that typefaces are built on traditions, what we extract as the essence or key characteristics of those traditions is a constructed selection. As such, every typeface is its own record of qualities and decisions that, when judged through any given lens, is reinterpreted beyond its white and black shapes.

Discussing taxonomies, then, is always contextual and subjective, based on their purpose and utility. Their epistemic integrity is unimportant beyond that purpose. Their completeness is also unimportant; adopting any taxonomy ensures that it will be broken. The moment any category lines are delineated, typeface designers (a people hopelessly contrarian in nature [Your mileage may vary.]) will blur them, as they have done before, intentionally or not.

Figure 5: Rotis, designed by Otl Aicher. A supremely unclassifiable typeface, each entry in the family stranger than the last, regardless of the order in which they come. Specimen image sourced from Wikipedia.

The contrarian impulse is arguably exactly the thing that, every now and then, sparks innovative ideas in typeface design. For that reason I can’t argue against the creation of new systems of classification and ways of thinking around typography — although it is a Sisyphean task to make a functioning taxonomy, creating new category lines which one might want to blur is an end in itself.

For the reader interested in breaking a typographic taxonomy at home, the following approaches are easy and convenient: for an analytical taxonomy, find a parameter it doesn’t measure and make it a pivotal aspect of the design; for a retrospective taxonomy, interpolate between categories that don’t have an in-between, such that the placement is ambiguous, or create something outside of existing categories.

The reason for establishing these breaking points is not to be pedantic, but rather to make it clear that the act of sorting and depicting letters through abstraction is inherently interpretative.

Expressing ideas

We ‘feel free’ because we lack the very language to articulate our unfreedom.
(Žižek, Slavoj. Welcome to the Desert of the Real!, p. 2)

Knowing that categorisations of type are always flawed, I’d like to engage in some speculation as to how that can be used. What I intend will hopefully become apparent once I point out what exactly this text is:

What I am writing here is a ‘thesis’. However, what if we think of thesis writing as an act that must take between 6 months and 3 years? [This claim is from Umberto Eco’s How to write a thesis (p. 17), and I’m obsessed with how it blends a ‘hard’ claim with ‘soft’ rules. It’s an unconventional way of thinking about the subject which changes how one might approach the writing of a thesis.] Then what I have written doesn’t qualify. [Depending on definitions… does ruminating on a theme occasionally count as research?]

If I argue that the format is decided by the word count, perhaps I’ve written a short essay; or by the location in which the text lies — in which case this is a blogpost. In all cases I am talking about the same document, in which I intend to express a simple idea, but by choosing to describe the first section of the document as a blurb for my blogpost rather than an abstract for my thesis, I create a completely different expectation for myself. So I write different footnotes, quoting different sources in different contexts. The simple designation of something into a certain mental bracket makes me reconsider it with different expectations. I cannot avoid the issue by not designating any quality to any category I know of, or by refusing to think about it — contextualising it in my existing body of knowledge is metaphysically necessary, because what concerns the field itself is a social construction. [The generalised argument would probably be: ‘in all possible worlds, given a practice that is socially constructed, that practice by necessity refers/defers in some way to the construct on which it is based.’ A ‘yellow is yellow’ argument if ever there was one.]

The idea that linguistic categories shape our views of reality is not new. These ideas are commonly discussed in, for example, the metaphysics of identity, the philosophy of linguistics, and cognitive science. In Philosophical Investigations, Ludwig Wittgenstein insinuates that a language is trained, and is a tool which enables or hinders certain types of thought. [Wittgenstein, L. (1968). Philosophical investigations, §1–47.]

There are named and studied phenomena, such as Categorical Perception, [Harnad, S. (2003). Categorical perception.] which specifically relate to how our constructed categories (thus taxonomies) relate to our ability to discern between concepts, [The evidence on this is tenuous; a 2022 review argues that there is no strong evidence for Categorical Perception in speech perception. However I use this] and some of these effects have been (tenuously) documented in graphic designers. [Dyson, M. C. (2011). Do designers show categorical perception of typefaces? No such studies on typeface designers specifically have been conducted to my knowledge.]
Among typeface designers there exists a maxim: ‘everything is done by eye’. [Hochuli, J. (2008). Detail in typography, p. 15.] Unfortunately, in my anecdotal experience, it is sometimes invoked as a sort of ward against discourse on the conditioning described above — for good reason; even when we know it happens, it doesn’t seem closely related to our practice. At the very least it feels abstract and theoretical rather than practical. [It could be that some think of their typographic practice as a medium for language but not a linguistic subject in itself.]

If we agree that the practice, through the process of testing and iterating, focuses on a sort of affect, [O’Sullivan, S. (2001). The aesthetics of affect: Thinking art beyond representation.] a specific visual feel of words or paragraphs, rather than on any specific categorical mapping, then that is what we should be focused on. The decisions are all in the service of a certain affect, and thus are, or should be, reliant on sight primarily, if not exclusively.

Although I do disagree with this version of the idea, I think the maxim itself is a good starting point that we can expand on to make it a practical tool. The argument I presented is not in itself wrong — its flaws come from not contextualising it and thinking about it further. As I discussed in Breaking taxonomies, what we are testing for, aiming for, and where those preferences come from, is learned. Even in the case of final adjustments in a typeface, which are often divorced from any system, the qualities the designer notices and can name as an issue [Some of my own experience on this: I can often notice there is a ‘problem’ in texture, some quality which I do not like but cannot name. I think the most growth I’ve experienced in the last year of studying and practising typeface design is in being able to name exactly the issue rather than just identifying that one exists.] are dependent on what they know to look out for, what they are trained to see as an issue, etcetera. A continuation of the maxim follows: ‘everything is done by eye, but you can only see what you know.’ [‘You can only see what you know’ is a thought I’m borrowing from Frank Blokland’s teaching of type design.]

If that’s the case, we should investigate the ways in which this happens. I would argue that this is the main utility of taxonomies, and where they intersect with proper forms of analysing and understanding type. While the former claim to simply assign letters into groups of some sort, the latter are expressly intent on changing one’s approach to typeface design — and therefore, no doubt, one’s output on some level. Where they interact is in the creation of categories for considering aspects of design. The former ultimately fail at their intended purpose anyway; upcycling the categories only makes sense.

We can first look at how differences in the way similar-spanning categories are constructed introduce new ways of conceptualising the design of type. The distinction between translation serif, humanist serif, and oldstyle serif creates different expectations for what exactly we’re referring to, even if each captures more or less the same range of typefaces. If oldstyle implies a certain period of punchcutting — that of Claude Garamont, Francesco Griffo, or Nicolaus Jenson — humanist or translation can imply a variety of typefaces using similar proportions (or patterns [Blokland, F. E. (2016). On the origin of patterning in movable Latin type.]) such as Arno or Palatino. Both translation and humanist require the term ‘serif’ — establishing some sort of semantic connection with similarly proportioned sans typefaces — but translation also refers not to a model or skeleton but to a tool, which adds a new layer of de-abstraction that is necessary between the model and the typeface. It’s perfectly feasible to imagine ‘humanist’ as a category containing a variety of typefaces, but translation is the mechanism that leads the pen to create those structures to begin with — the order in which structure and contrast happen becomes reversed.

Figure 6: Six serif typefaces and two models with humanist proportions.

When working with LetterModeller — one framework for abstracting text proportions into a skeleton or model plus translation contrast — the base models cover both venetian and garalde proportions (they are more or less the same). [Blokland, F. E. (2016). On the origin of patterning in movable Latin type, pp. 221–225; Olocco, R. (2024). The Jenson Roman, pp. 92–101.] These proportions do not map exactly onto every serif text typeface that uses translation contrast, but they map quite closely, as seen above. What changes in this approach is that the humanist proportions are abstracted into models of some sort. So then, when we work, we are forced to engage in de-abstraction — a conscious pursuit of taking the existing qualities provided by the model and filling the empty spaces this particular model left for details.

Figure 7: In this font I put together (rather hastily) based on the geometric LeMo model, the stems are distributed in an even rhythm and the round shapes are treated as horizontal ‘overshoots’ of the straight stem. This translates to serifs which are heavy on the bottom right, needed for the n, h, i, m, etc. to counter-balance the larger amount of black on the left side of the letter at the x-height compared to the right. This grid of stems is then made finer, and all the other letters are proportioned based on it.
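To make the caption’s procedure concrete, here is a rough numeric sketch of that rhythm, with all values invented for illustration: stems sit on an even grid, and round shapes push past it horizontally, much as verticals overshoot the x-height vertically.

```python
# All numbers are hypothetical font units, chosen only to illustrate the idea.
PITCH = 250           # distance between stem centres on the even grid
STEM = 80             # stem width
ROUND_OVERSHOOT = 12  # how far a bowl pushes past the stem grid on each side

def stem_positions(count: int, start: float = 0.0) -> list[float]:
    """Centres of `count` stems placed on the even grid."""
    return [start + i * PITCH for i in range(count)]

def bowl_extremes(left_stem_centre: float) -> tuple[float, float]:
    """Horizontal extremes of a round counter (as in o, b, d) treated as a
    horizontal 'overshoot' of the straight-stem grid."""
    left = left_stem_centre - STEM / 2 - ROUND_OVERSHOOT
    right = left_stem_centre + PITCH + STEM / 2 + ROUND_OVERSHOOT
    return left, right

print(stem_positions(3))   # an m-like pattern: [0.0, 250.0, 500.0]
print(bowl_extremes(0.0))  # an o overshooting the same grid: (-52.0, 302.0)
```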

Maybe ‘you can only see what you know’ can be taken further; just as I can choose to perceive this thesis as a blogpost based on arbitrary conditions which I’ve delineated, I can choose to perceive any given feature of a typeface selectively. In essence, I suggest taking advantage of the incredible human ability to self-deceive and be deluded, and of the ability to hold many conflicting thoughts in one’s mind at the same time, to let ideas interact in ways in which they otherwise wouldn’t. The maxim expands again into its final form: ‘Everything is done by eye, but you can only see what you know, and how you know it is a matter of the imagination.’

To demonstrate with a more pragmatic, obvious example, let’s consider the following sentences:

Typeface design is all about consistency.

Typeface design is all about the distribution of volume. [Something a stone-carver might say.]

Typeface design is all about the relation of contours. [No source here; this came from the heart. I often liken the making of contours to musical counterpoint, where two distinctive lines following the same thread interact across the width of the ‘writing tool’, or in digital terms, the contrast angle.]

Typeface design is all about the context of printing traditions. [A tempting thought when considering only Vox-AtypI when designing a text face.]

Typeface design is all about rhythm. [The importance of the rhythm of stems is emphasised in Dr. Blokland’s On the origin of patterning in movable Latin type.]

Typeface design is all about creating something invisible. [Morison, S. (1996). First principles of typography, p. 6.]

Typeface design is all about the white of the word. [Noordzij, G. (2019). The stroke: Theory of writing, p. 16.]
All of these sentences can be seen as true, even simultaneously. At the same time, the normal practice of typeface design can choose to exclude any of them, and has to prioritise them in some order. Drawing letters with any one of them in mind over others would produce different outcomes.

Let us select three of those sentences in a specific order and use them to construct a hypothetical process, to illustrate how they affect the shapes. The first priority will be the rhythm — more precisely, the stem rhythm — so we might end up with something Jensonian to balance the distribution of black shapes in the ‹n›, ‹m›, ‹h›, etc. If we then add the consideration of the flow of volume, perhaps that requires some sort of stem concaveness that blends the stems into the serifs. [To get closer to G2 continuity, which is impossible when connecting a straight line to a curved one.] At this point, in order to start prioritising consistency, we have to be selective about what we’re making consistent; otherwise we risk undoing the concave Jensonian text we already worked on. If we were to make the serifs even in length now, the spacing we need for an even stem rhythm would create too much whitespace at the end of shoulder-adjacent stems. If we want to make serifs consistent between diagonals and stems now, we have to guesstimate the flow of curvature or come up with elaborate algorithms, which either way have to be engineered to conform to our visual, guesstimated preferences.

By choosing to prioritise the rhythm and flow of volume first, we’ve engineered a process that is more intuitive and, visually speaking, less structured. We can become much more cognisant of which of those ideas we choose to include or exclude if we let them interact — and if we let ourselves watch them interact. The sentences I provided are quite general, but granularity can be increased with deeper theory.

Also interesting is that — again — the necessary act of positioning a typeface, or more broadly thinking about it as something that co-exists with other typefaces, predisposes one to thinking in some selection of such categories. Everything in typography is a detail, so these small distinctions — the identifying features that actually differentiate one typeface from another — are contingent on the words we have for them, and can become inexpressible and therefore, in some way, invisible: especially to the untrained eye, but to everyone in one way or another. We can, in a very literal sense, lose the freedom to perceive something because we do not possess the mental category to consider it. For example, someone who has never heard of LeMo might be very pleased with the texture of a text font with an even stem rhythm for the characters ‹b, d, h, i, j, l, m, n, o, p, q› but might not be able to offer an explanation, or even observe the rhythm. In trying to recreate it by eye, they would likely fail to capture the exact quality they could not have described.

Letting these vocabularies interact is something that, to some extent, happens already. New terminologies, new taxonomies, new ideas emerge all the time. As typefaces need to be described more granularly, more granular words are used. Although cliché, the infamous Stanley Morison quote still applies:

The good type-designer therefore realizes that, for a new fount to be successful, it has to be so good that only very few recognize its novelty.
(Morison, Stanley. First principles of typography, p. 6)

Like many processes within the field of typeface design, this can happen intuitively as much as it can happen when controlled. It always comes back to the eye; if taking one approach produces outputs the designer doesn’t like, they will pick a different one. But perhaps the range of approaches overall becomes more homogenous when many decisions are made on unchecked intuition — on visual study but not theoretical study. It seems to me that despite all the new adjectives entering our already complicated lexicon, the Vox-AtypI-esque basis under them is barely moved. Perhaps the way forward for the typographic canon is more hypotheses, more loose metaphysical thinking, and more theories; some rational, some unequivocally deluded. Whether or not the theory is true essentially doesn’t matter.

When the final output is a bézier path or a drawing or painting or stone-carving, virtually no amount of irrationality in the process can prevent the final lines and areas from being good. This can be a critique of either the alternative systems or the standards. Our typographic canon, our ways of thinking about type, could be completely wrong and irrational, and we would be prevented from correcting them by the nature of our craft; anything can be made to work, and even completely misguided ideas can work unreasonably well. We work, in the last stages, with our eyes. Whether we start with conventions, habits, or alternative systems of thought (which often can produce similar output to what’s expected from conventions [I think a good example of that is On the origin of patterning in movable Latin type — of course, to some extent it is speculation, because no records remain for the Jenson roman or the Eusebius type, the foundational roman fount. But interestingly, even recent books like Riccardo Olocco’s The Jenson Roman do not mention it. This typographic canon — of everything being created by eye, influenced by tools and medium perhaps, but not by a restrictive system — is deeply ingrained, whether intentionally or through the lack of availability and popularity of ‘alternative’ knowledge. There is absolutely no direct historical evidence for assuming there wasn’t an underlying system, the same way there isn’t for assuming there was. The Eusebius type was, in essence, a foundational model — if we consider the option that it was later copied rather than reinvented, treating later writings on type as having explanatory value for the original model is like engaging in a twisted, typographic version of the watchmaker argument (which Hume had already undermined in the late 18th century). There is a critical assumption at its core that is fundamentally unprovable. So then, why aren’t explanatory hypotheses employed more? I would risk saying the only hypotheses accepted as credible tend to be ones that in some way already agree with the ‘done by eye’ canon, like counter-punching (which is a process, but not a restrictive system). Whether or not this is a projection of contemporary practice onto that of the past is outside the scope of this text.] anyway), we will bring it to a state which satisfies us aesthetically in the end. If that is so, the critique is actually of commitment to a singular metaphysical framework for considering type. They are all, probably, wrong; they have to be. But they are also, in different ways, useful.

We end in a place in which we are doomed to succeed. Regardless of the approach we take, we can create something that works because we can trust our eyes with the final judgement given enough experience. It seems to often be an intuitive process, but in so being, it takes away the utility of deeply systematic, analytical thinking, and the liberation that comes with conscious yet committed delusion.

Thinking slow

Type design takes time — at its fastest it can take a full day; at its slowest, about thirty years. [To my knowledge of existing projects — I suppose anything can take about as long as it wishes to, given the right circumstances. I’m also keeping a narrow idea of what type design is. For example, I’m not considering companies like Dai Nippon Printing, which has been maintaining a type family for over a century. I’m referring to the digital process of making a file and declaring some version of it ‘finished’ or ‘publishable’.]

When making a typeface, all kinds of ideas interact, from our ways of thinking to our aesthetic impulses to the things we do because they are convenient, or due to technical limitations. But, as I see it, there is a kind of rashness to it: a reliance on the eye to fix the issues of our metaphysical gaps without addressing them. Meanwhile, we can apply different ideas where they’re necessary and, in the same way, gratuitously where they’re just interesting. It’s always in the realm of possibility to be systematically sloppy, metaphorically or physically. To consider what exactly makes one’s practice, and to treat that thought process as a part of the practice itself. We can apply bizarre principles with full conviction. Hold for applause — yes — we can be delusional.

When I say that the only way to innovate in the field of typeface design is through new terms, new meanings, new ways of understanding, this is what I insinuate. Typographic innovation is a matter of small changes over a long time. Every typeface deviates from the convention — the convention is just a placeholder, an ideal construct that doesn’t actually exist in our universe, just like the concept of a circle — but the hallmark of an innovative typeface, to me, is that the deviation is principled, and the inconsistency consistent. Innovation is a term I haven’t defined, on purpose; I believe it’s an unknowable process before it happens, each time different. But I think by definition it starts with strangeness, so strangeness is what I wanted to argue for.

To end, let’s look at the beginnings of this idea. I noticed that the typographic canon seems to be stuck in a modernist philosophy when it comes to describing type, but in a romanticism in its making that could stem from retrospective thought. Having recently thought about modern feminist literature on phantasms, I had to wonder whether that isn’t caused, at least in part, by a certain specific narrow worldview that just so happened to intersect with the main set of words through which communication occurs in the field: a retrospective system devised in the 1950s and 60s. Having been taught formally to recognise type through Noordzij’s ideas among other methods, [The LeMo method, calligraphy and practical broad-pen contrast research, reviving type, stone-carving, drawing and drafting.] and previously having been a self-taught amateur type designer who knew nothing beyond the family of Vox-AtypI taxonomies, I could recognise the difference in my understanding that came from just having the words to describe things I was seeing — and seeing for the first time the things which I had new descriptive words for. While I’m using the word speculation to describe what I’m doing, I think the approach I’m taking is a rather pragmatic one; through the adoption of conceptual tools as design tools, I reject the separation of thinking and making.

This text does not carry some kind of innovative idea — while writing it, I continuously felt that the point I was alluding to was rather banal. It seems so simple — perhaps it’s implied for most people? After all, there is so much recent literature in the humanities about self-deception, about the failures of categories, about the biases they create. But how exactly this process happens, and what it means to put it to use (rather than merely learning to change one’s thinking where different rules apply, to change it dynamically as a means to achieving different ends), is just as important to me as a typeface designer as the anthropology of it is to me as a person. Thus, even if it was just for myself, I tried to put it in words. So then, maybe I wrote neither a thesis nor a blogpost nor an essay, but a PSA.

Bibliography

Ásta Kristjana Sveinsdóttir. (2018). Categories we live by: The construction of sex, gender, race, and other social categories. Oxford University Press.

Blokland, F. E. (2016). On the origin of patterning in movable Latin type: Renaissance standardisation, systematisation, and unitisation of textura and roman type [Leiden University]. https://hdl.handle.net/1887/43556

Blokland, F. E. (n.d.). Sum of Particles. Retrieved 5 February 2025, from https://www.lettermodel.org/sum_particles.html

Devroye, L. (2025). Vox type classification. Devroye.org. https://luc.devroye.org/fonts-32669.html

Devroye, L. (n.d.). British Standards for Type Classification. Retrieved 30 January 2025, from https://luc.devroye.org/britishstandards.html

Dyson, M. C. (2011). Do designers show categorical perception of typefaces? Visible Language, 45(3), 193.

Foucault, M. (2012). The Archaeology of Knowledge. Knopf Doubleday Publishing Group.

Harnad, S. (2003). Categorical perception.

Hochuli, J. (2008). Detail in typography. Hyphen Press.

Kupferschmid, I. (2012, March 31). Type classifications are useful, but the common ones are not. https://kupferschrift.de/cms/2012/03/on-classifications/

McMurray, B. (2022). The myth of categorical perception. The Journal of the Acoustical Society of America, 152(6), 3819–3842. https://doi.org/10.1121/10.0016614

Morison, S. (1996). First principles of typography (New ed.). Academic Press.

Noordzij, G. (2019). The stroke: Theory of writing (P. Enneson, Trans.). Uitgeverij de Buitenkant.

Olocco, R. (2024). The Jenson Roman. Lazy Dog Press.

O’Sullivan, S. (2001). The aesthetics of affect: Thinking art beyond representation. Angelaki, 6(3), 125–135. https://doi.org/10.1080/09697250120087987

Tracy, W. (2003). Letters of credit: A view of type design. Fraser.

Wittgenstein, L. (1968). Philosophical investigations. Basil Blackwell.

Žižek, S. (2002). Welcome to the desert of the real! Five essays on 11 September and related dates. Verso.

Thank you,

Classmates who contributed ideas and feedback:
Jakob Wilke
Angèle Jaspers

My thesis supervisor:
Füsun Türetken

My coding tutors:
Thomas Buxo
François Girard-Meunier

Website text is typeset in
William Text from Typotheque

Fonts used in demo images:
Figure 4, 6. Article from Matter of Sorts (captions)
Figure 6. Adobe Garamond Pro of Adobe Originals
Figure 6. Adobe Jenson Pro of Adobe Originals
Figure 6. Arno Pro of Adobe Originals
Figure 6. Palatino, designed by Hermann Zapf
Figure 6. Solera, designed by myself
Figure 7. Model, designed by myself

Royal Academy of Art, The Hague. Graphic Design thesis.
A whimsical look at the order of things, inverted.
Written by Nu in 2025.

Extras

There are a lot of ideas that I cut from the text, and some elaborations. This set of texts is a series of here-and-there ruminations on those cut ideas, on the meta of the text, and on other things I want to say. If the thesis itself is not a blogpost, this certainly is. And you can bet I won’t be quoting relevant sources.

Straight stems why?

TEFF Trinité by Bram de Does has a 1° slant in its roman, and if its wide use in Dutch publishing proves anything, it’s that the convention of a straight stem is not a necessity for readability. This is one of the many paradoxes of doing things by eye while still following conventions or convenient structures. If everyone were doing everything by eye, we should expect subtle leans in all directions; nobody’s eyes see the same straight line when looking at a display. This, I speculate, is a case of computer-made fonts doing what the computer does conveniently, but also a form of ‘you can only see what you know’ — the only way we know a stem is straight is because the computer is telling us so, but who’s to say that a straight stem should follow a vertical upwards line of pixels with no lean whatsoever? I personally think a vertical, straight stem often makes (web)pages too rigid.
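For the curious, the lean in question is a one-line transform. A small Python sketch with invented coordinates: skew every point around the baseline by 1°, and a 700-unit stem ends up about 12 units off vertical at the top. Enough to soften the rigidity, barely enough to see.

```python
import math

# Skew outline points by one degree around the baseline.
# The stem coordinates below are hypothetical font units.
ANGLE = math.radians(1.0)

def slant(points, angle=ANGLE):
    """Shift each x by y * tan(angle); the baseline (y = 0) stays put."""
    return [(x + y * math.tan(angle), y) for x, y in points]

# A perfectly vertical 80-unit stem, 700 units tall...
stem = [(0, 0), (80, 0), (80, 700), (0, 700)]
# ...now leans about 12 units to the right at its top: tan(1°) ≈ 0.01746.
print(slant(stem))
```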

If this were to be a longer text, I would probably also argue for drawing as an antidote — drafting, even, albeit without a ruler or french curves. Our arms can be trained to perform completely different movements much more consistently (e.g. playing any kind of musical instrument, writing any kind of script in any style) than our computers, which notoriously operate on rectangular bounding boxes. I might even try to argue that paradigm-shifting typefaces will never happen in Glyphs or FontLab, although I do not know if I agree with myself on that yet.

Granularity

In writing, I’ve looked at — as expected for a text of a few thousand words — rather surface-level aspects of the idea I’m discussing. In the world’s most unforeseen twist, there is more to it, and it can be applied at all levels. ‘Typeface design is all about’ or ‘letter design is a matter of’ are surface-level statements that, because of their simplicity, are also endlessly complex and inaccessible without a large time commitment; they could span entire typographic corpora.

But we can make it small. It can be done with such simple qualities like relatedness (‘the bottom of the bowl of the ‹a› is related to the shoulder of the ‹n›’), or function (‘the purpose of a serif is a spacing device to establish an even stem rhythm’) or simple aesthetic considerations (‘the upper stem serifs on ascenders should be unrelated to the ductus of the x-height strokes to create a cleaner upper bound of the line’).

Even still, it can be made more granular. I could invent a view of letters in which a good serif is a multiple of some division of the stem width — perhaps a good serif must always be some multiple of the ‘⅓ stem’ unit. Then, do I calculate them differently for the ‹N›? Or is it that the width of the thin strokes of the ‹N› is already decided by the existence of such a metric? I use the word ‘good’ to describe a serif, but I don’t necessarily even have to agree with myself that this is a good practice and should be kept consistent. It can just be a convenient shorthand. Perhaps all diagonal diacritics should adhere to one of four angles, or pleasant proportions of the ‹þ› should decide the ascenders and descenders, rather than the opposite being true. Or perhaps the overshoot of the capitals should be determined by a pleasant-looking ‹Œ›. Maybe a serif font with non-concave stems should take between 3 months and 2 years to make, and a concave-stem one between 6 months and 4 years. There are practical and impractical rules, theories of type, that can be used. Of course, these are still very simple. There is basically no limit to how comprehensive or informed or impractical a metaphysics (singular) of type can be. I bring up LeMo on several occasions because I think it’s a distinctly simple (but therefore very complex) and yet still restrictive metaphysics of type. If restrictions aren’t essential, they are at least very helpful — and of course, they can be subverted as needed.
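As a demonstration of how small such an invented rule can be, here is the ‘⅓ stem’ unit written out as code. Again, this is a made-up metaphysics, not established practice.

```python
# An invented rule of the kind described above: serif lengths must be whole
# multiples of a '1/3 stem' unit. Numbers are hypothetical font units.
STEM = 90
UNIT = STEM / 3  # the '1/3 stem' unit: 30

def quantise_serif(desired_length: float) -> float:
    """Snap a visually guessed serif length to the nearest whole unit."""
    return round(desired_length / UNIT) * UNIT

print(quantise_serif(95))   # 90.0, i.e. three units
print(quantise_serif(110))  # 120.0, i.e. four units
```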

The development of a theory is not an easy task. Just to be clear, what I’m advocating for is not simple tips like using the ‹Œ› to gauge the overshoots of capitals. The idea that ‘the ‹Œ› is a basis for vertical capital proportions’ carries broader implications than that, albeit it is still unrepresentative — when I speak of theories, I mean things that can take years or decades of research to develop. The Roman brush caps of Edward Catich (as shown in The Origin of the Serif), Noordzij’s The Stroke, Blokland’s LeMo, Fred Smeijers’ Counterpunch, etc. are more adequate examples. Basically, I’m thinking of any systematised view of typeface design that doesn’t just shrug and say ‘just eyeball it’ — for one, because typeface designers generally know how to do that already, and in later stages doing it is basically mandatory. The exact granularity and scope always depend on what exactly the theory pertains to, and the range of possibilities is pretty much infinite.

Copying masters

The act of referencing is in itself an act of creating a new metaphysics of type, albeit an intuitive, visceral one. We think ‘oh, so the ‹k› can also look like this’ or in fact ‘it should look like this’. We might think ‘oh, so the small cap ‹A› can be wider than I thought’ or ‘oh, so the ‹r› and ‹s› can be set in the same character width — and both very narrow’.

What I like about it is that it is a sort of conceptual opposite to the idea of making theories of type broadly, regardless of how granular they get. A whole thesis could be written on this (false, albeit very compelling) dichotomy — or, for the sake of comprehensiveness, duality — of referencing as an outside-in approach, and theorising as an inside-out approach to designing letters or anything else. When the final result is swayed by the looseness of our vision, it really doesn’t matter so much whether we begin approaching it from a point of rigidity or looseness — although it’s easier to make the looseness at the end palatable, in my experience, when starting from a place of rigidity.

The great conspiracy

In CONSPIRACY, Natalie Wynn of Contrapoints calls conspiracies a kind of ‘occult divination’, or being ‘to history and media analysis what astrology is to astronomy’ — ‘pre-modern thinking, post-modern politics.’ I couldn’t help but notice that this phantasm, this cognitive dissonance, is not unlike what I describe in the modern typographic world — a pre-modern romanticism that I attribute to retrospectivity, and in my case a modernist thinking that I attribute to our vocabularies. But this is another thread — is it an explanation? Perhaps. As with the origins of conspiracy in her video — partly stemming from the difficulty of accepting that disproportionately small causes could lead to major outcomes, or, here, conventions — it’s possible that our conventions come from much less magical sources than we imagine. Perhaps the reason for our views of type, a weird balance of romanticism and modernism, is not caused by our system of classifications; rather, the human search for grandeur leads to the adoption and stubborn ongoing use of the lexicons we use today.

I could speculate on the order in which these things come, or why they do so. But I wonder if it isn’t reductive to do so. I suppose what I am doing, in the end, is providing language as a tool of analysis for thought and design, or as a design tool in itself. Does it correspond to aesthetic conditioning? It must — few things are less aesthetic-adjacent than literal theories of design. Do we favour the aesthetic of historicity? Of measurability? Of mathematical perfection? Of the fundamental humanity behind the craft? The answer to any of these could be yes. It’s their usage, though, that matters. The words for the categories are just representations of the applied metaphysics they allude to. Our motivation to use certain systems over others is not so important to what I’m discussing, but it is nevertheless rather interesting.

If I had 3 more months / Relaxed musings

When talking with friends, I heard it noted — and felt similarly myself — that with another few months, any one of us would have written a completely different thesis. Or one of the ideas presented would become the thesis. For me, that singular idea would be the metaphysics of type: the specifics of what knowledge creates which views, how the processes of theory and reference prod at type from different angles. I think if I have a complaint about the text, it’s that it spends so much time providing context and looking at the basics that it doesn’t have the space to move into these ideas as they become more interesting, more convoluted, as they break, as they interact.

Perhaps that’s another text, but even an additional 3 months is not a realistic target for writing it. In one of those conversations with friends I added: ‘If I had 4 more years to write this thesis, it would surely become a good blogpost’. I live by that. If I don’t ever write any subsequent text on the metaphysics of type, this section is there to spark some ideas in the reader about where it might have gone. The philosophy-flavoured musings I’m working on exist mostly to engage in gratuitous contemplation, so I’m not particularly fazed about leaving them unfinished in written form.

This website

When making this website, I decided to make the writing and coding one process. As such, I have one .docx file in which this whole text lives, and an absolutely devastating terminal script (pictured below) creates my HTML. The CSS and JS are written manually. I tried to make them work without referencing any specific names, and although I gave up — there is more ‘hard’ code in the process than I’d ideally like — I still have not written a line of HTML in this whole thing, and that’s technically a success.
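For a sense of what such a pipeline can look like, here is a minimal sketch assuming pandoc is installed; the file names are placeholders, and this is not the author’s actual script (which is pictured below). Pandoc’s docx reader has a ‘styles’ extension that keeps paragraph and character style names in the output, which is the kind of hook that lets CSS and JS respond to styles instead of hand-written HTML.

```python
import subprocess

# A minimal docx-to-HTML conversion sketch, assuming pandoc is on PATH.
# "thesis.docx" and "index.html" are placeholder names, not the author's files.
subprocess.run([
    "pandoc",
    "thesis.docx",            # the single .docx in which the whole text lives
    "--from", "docx+styles",  # keep Writer style names as custom-style attributes
    "--to", "html",
    "--standalone",
    "--output", "index.html",
], check=True)
```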

In general, I wanted to treat the process of writing and thinking and presenting a typeset version of this website as one whole thing. Is that a conceptual reason? Sure, I guess — I’m writing all about thinking, about thought itself being a vehicle of design. I bring up this preposterous hypothesis as a starting point for discussing what I’m really saying: a rather obvious idea that by choosing to name different categories, we can direct our gaze onto specific aspects of design such that they produce a different affect. So one could argue that I went out of my way to place myself in a way of thinking that would create a certain type of output; working with only .docx in LibreOffice, every piece of styling would have to be a piece of CSS or JS that responds to some character style, to some paragraph style, or to some predictable progression of objects that I could sort out with CSS selectors. By adding further constraints — ‘two type sizes’, ‘one typeface’, ‘only roman and italic’ — past me, prioritising a certain esoteric view of typography and idolising maximally simple solutions to typesetting problems, put current me in the position of having to work with direction-based design: a weird kind of responsive LaTeX for the web where I do not have half of the control I wish I did.

I could say that. I could also say that I didn’t want to be bothered changing my text in two locations (I’m a messy editor) and writing a bunch of annoying HTML tags when there was already an open-source machine, albeit in unassembled parts, to do that for me. This is also true. In the end, I wouldn’t claim a full separation of the two ideas. I feel an aesthetic draw towards certain types of typesetting which allow me to make certain decisions. Similar aesthetic impulses direct my understanding of typeface design. It’s no wonder these two approaches can be connected somehow; they are the product of the same mind thinking along the same lines.