<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Amin Khansari's Notes]]></title><description><![CDATA[Amin Khansari's Notes]]></description><link>https://akhansari.tech</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 05:11:20 GMT</lastBuildDate><atom:link href="https://akhansari.tech/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Slop Coding Era]]></title><description><![CDATA[Open LinkedIn on any given day and you'll find dozens of posts proudly declaring: "I don't write code anymore, AI does it all for me." Scroll through the comments, and you'll see a chorus of agreement. We've entered what I call the slop coding era, w...]]></description><link>https://akhansari.tech/the-slop-coding-era</link><guid isPermaLink="true">https://akhansari.tech/the-slop-coding-era</guid><category><![CDATA[AI]]></category><category><![CDATA[DDD]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Thu, 12 Feb 2026 20:07:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/OA0qcP6GOw0/upload/2413c8dad3f40f1717e0c77630e862cf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Open LinkedIn on any given day and you'll find dozens of posts proudly declaring: "I don't write code anymore, AI does it all for me." Scroll through the comments, and you'll see a chorus of agreement. We've entered what I call the slop coding era, where quantity of output is celebrated and quality and outcomes are an afterthought.</p>
<h2 id="heading-the-hype-machine">The hype machine</h2>
<p>Let's start with the terminology. The industry keeps calling it "AI" as if we're dealing with actual intelligence. We're not. These are generative language models trained on statistical patterns. Calling it AI is arguably the biggest marketing scam of our decade. It sounds impressive, it sells tokens, and it makes everyone feel like the future has arrived. But precision in language matters, especially for engineers.</p>
<p>Now look at who's pushing the "I never code anymore" narrative. Most of them carry titles like "AI Engineer", "AI Trainer", or "AI Consultant." Their livelihood depends on the hype. Are they being honest, or is this just self-marketing? It's hard to tell. And that's part of the problem.</p>
<p>Then there are the AI providers themselves. They have every incentive to make us believe that coding is becoming obsolete. More belief in full automation means more token consumption. Are they being honest about what their tools can and can't do? You can decide for yourself.</p>
<p>Sometimes it also feels like the revenge of low-skill developers. Those who never invested in understanding software design, architecture, or maintainability now claim the playing field has been leveled. But producing code and engineering software are two very different things. These are the same people who produce spaghetti code in record time, which delights managers. Then they leave the company after a few months, leaving behind a mountain of technical debt and sustaining the illusion of hardworking, productive managers and "experts".</p>
<h2 id="heading-the-danger-hidden-in-the-darkness">The danger hidden in the darkness</h2>
<p>Here's something that should concern every serious developer: the more you rely on GenAI to write your code, the more your own skills degrade. It's a slow process. You stop thinking through problems because the tool gives you an answer in seconds. You stop learning patterns because you never need to write them from scratch. One day you realize you can't debug the code your own tool generated, or even prompt effectively, because you no longer deeply understand what's happening.</p>
<p>And the code itself? It's not always good. GenAI doesn't understand your domain, your constraints, or the long-term consequences of its choices. As the codebase grows, the generated code becomes a nightmare to maintain and debug.</p>
<p>This has a broader consequence that few people discuss: innovation and open-source could suffer from this. GenAI struggles with anything outside the well-trodden path. It works best with mainstream frameworks and popular libraries because that's what it was trained on. If developers increasingly depend on it, who will build the next unconventional framework or library? Who will challenge existing paradigms? The tools are biased toward the average, and they'll push us all in that direction if we let them.</p>
<h2 id="heading-the-false-promise-of-techniques">The false promise of techniques</h2>
<p>"Just use agentic loops." "Just write specs and let the AI implement them." "Just do test-driven development with AI." "Just follow the latest Anthropic hype." I've heard all of these, and I've tried most of them. They work for some cases, but they are not sustainable for everything.</p>
<p>These techniques add layers of complexity. You end up spending as much time steering the GenAI, reviewing its output, fixing its mistakes, and re-prompting as you would have spent writing the code yourself. Sometimes more. The illusion of productivity is strong, but the reality is often different.</p>
<h2 id="heading-what-genai-actually-does">What GenAI actually does</h2>
<p>I believe GenAI is not intended to generate code for humans. It generates code that is statistically close to what you ask for. It's reproducing patterns it has seen before, not solving problems. It doesn't reason about your architecture. It doesn't understand why a particular design decision matters.</p>
<p>But is this a problem? Not always. For some use cases, "statistically close" is perfectly fine. It's OK to have verbose and convoluted code as long as the LLM can understand it with minimal token consumption.</p>
<p>And that's where we need to be strategic.</p>
<h2 id="heading-domain-driven-design-to-the-rescue-again">Domain Driven Design to the rescue, again</h2>
<p>If you're familiar with Strategic Domain Driven Design, you know that not all parts of a software system carry the same weight. DDD distinguishes between:</p>
<ul>
<li><p><strong>Core domains</strong>: where your business differentiates itself. This is where competitive advantage lives and where the deepest understanding of the problem is required.</p>
</li>
<li><p><strong>Supporting domains</strong>: necessary but not differentiating. They support the core but don't define your business.</p>
</li>
<li><p><strong>Generic domains</strong>: solved problems. Authentication, email sending, CRUD operations, standard integrations.</p>
</li>
</ul>
<p>This distinction is exactly what we need to decide where GenAI belongs.</p>
<p>Supporting domains, sometimes generic domains, or small microservices are good candidates for GenAI-assisted development. The patterns are known, the stakes are lower, and "good enough" is truly good enough. You don't even need to read or fully understand every line of the generated code; as long as it works and passes tests, it serves its purpose. Let the tool do the heavy lifting there.</p>
<p>But for core domains, it's the opposite. We should absolutely resist the temptation to hand over control. This is where your deepest business logic lives. This is where subtle bugs cost real money. This is where you need to understand every line of code, and where "statistically close" is not acceptable.</p>
<p>In core domains, GenAI should be limited to:</p>
<ul>
<li><p><strong>Completing a function</strong> you've already started writing</p>
</li>
<li><p><strong>Simplifying or optimizing</strong> an existing piece of code</p>
</li>
<li><p><strong>Replicating a well-written feature</strong> following patterns you've established</p>
</li>
<li><p><strong>Handling repetitive code</strong> that follows a clear, validated structure</p>
</li>
</ul>
<p>In other words, you lead, the tool follows. Not the other way around.</p>
<p>I wrote previously about <a target="_blank" href="/the-right-way-to-code-with-genai">the right way to code with GenAI</a> that you can check out.</p>
<h2 id="heading-stop-following-the-hype">Stop following the hype</h2>
<p>I'm tired of the trends. I'm tired of the breathless LinkedIn posts. After testing many solutions and approaches, I've found my own way of using GenAI, and it starts with being honest about what it can and can't do.</p>
<p>The developers who will build lasting software are those who keep their skills sharp, think critically about their tools, and refuse to let marketing narratives dictate how they work. GenAI is useful. It's not magic. And treating it as magic is how you end up with a codebase nobody can maintain, not even the GenAI itself.</p>
]]></content:encoded></item><item><title><![CDATA[Mise: Everything in Its Place]]></title><description><![CDATA[Mise (pronounced "meez") is a polyglot tool version manager and task runner. The name comes from the French cooking term "mise en place," which means "everything in its place." Just like a chef prepares all ingredients before cooking, mise helps you ...]]></description><link>https://akhansari.tech/mise-everything-in-its-place</link><guid isPermaLink="true">https://akhansari.tech/mise-everything-in-its-place</guid><category><![CDATA[tooling]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Fri, 30 Jan 2026 18:34:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/KVacTm0QeEA/upload/98d8b31b2fd648859e88e0fa0753c334.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Mise (pronounced "meez") is a polyglot tool version manager and task runner. The name comes from the French cooking term "mise en place," which means "everything in its place." Just like a chef prepares all ingredients before cooking, mise helps you prepare your tooling and environments before coding.</p>
<p>If you work on multiple projects with different tool versions and many repetitive tasks, you know how messy things can get. One project needs Node 20, another needs Node 22. You have dozens of npm scripts, shell commands, and environment variables. This is where <strong>mise</strong> comes in. Mise replaces tools like nvm, pyenv, make, and even brew or parts of npm scripts. It gives you one simple tool to manage everything.</p>
<p>👉 <a target="_blank" href="https://mise.jdx.dev">https://mise.jdx.dev</a></p>
<h1 id="heading-tools">Tools</h1>
<p>The first pillar of mise helps you manage installations of programming language runtimes and other tools.</p>
<h2 id="heading-local-vs-global-tools">Local vs Global Tools</h2>
<p>Mise distinguishes between <strong>local</strong> (project-specific) and <strong>global</strong> (user-wide) tools.</p>
<p><strong>Local tools</strong> are defined in your project's <code>mise.toml</code> file:</p>
<pre><code class="lang-toml"><span class="hljs-section">[tools]</span>
<span class="hljs-attr">node</span> = <span class="hljs-string">"25"</span>
<span class="hljs-attr">rust</span> = <span class="hljs-string">"latest"</span>
</code></pre>
<p>When you run <code>mise use node@25</code> in a project directory, it updates the local <code>mise.toml</code> file.</p>
<p><strong>Global tools</strong> are defined in your home configuration (<code>~/.config/mise/config.toml</code>). Use the <code>-g</code> flag to install globally: <code>mise use -g node@latest</code>.</p>
<p>Global tools serve as defaults when no local configuration exists. This means you can have Node 25 as your global default, but a specific project can override it with Node 22.</p>
<p>When Mise is activated in your shell, it modifies your <code>PATH</code> to prioritize the correct tool versions.</p>
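<p>Activation itself is a one-line hook in your shell configuration. A minimal sketch for bash (mise also supports zsh, fish, and other shells):</p>
<pre><code class="lang-bash"># ~/.bashrc — hook mise into the shell so it can
# rewrite PATH on every prompt and directory change
eval "$(mise activate bash)"
</code></pre>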
<h2 id="heading-registry-hundreds-of-tools-ready-to-use">Registry: Hundreds of Tools Ready to Use</h2>
<p>Mise includes a <a target="_blank" href="https://mise.jdx.dev/registry.html"><strong>registry</strong></a> with hundreds of pre-configured tools. You can use simple shorthand names.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Install without activating</span>
mise install -g pnpm@9

<span class="hljs-comment"># Search for tools</span>
mise registry | grep opencode
mise use -g opencode

<span class="hljs-comment"># Or the built-in search and use</span>
mise use -g

<span class="hljs-comment"># Shows outdated tool versions</span>
mise outdated

<span class="hljs-comment"># Upgrades outdated tools</span>
mise up

<span class="hljs-comment"># Run a command with specific tools</span>
mise <span class="hljs-built_in">exec</span> python@3.12 -- ./myscript.py

<span class="hljs-comment"># See which tools are active</span>
mise current

<span class="hljs-comment"># List all installed tools</span>
mise ls
</code></pre>
<h2 id="heading-backends-install-tools-from-anywhere">Backends: Install Tools from Anywhere</h2>
<p>When a tool is not in the registry, mise can install it from various <a target="_blank" href="https://mise.jdx.dev/dev-tools/backends/"><strong>backends</strong></a>: different sources from which mise can find and install tools, such as npm, dotnet, or even directly from GitHub or HTTP.</p>
<p>For instance, I use Fedora COSMIC Atomic, where the <a target="_blank" href="https://ghostty.org/">Ghostty</a> terminal is not yet available as a <a target="_blank" href="https://flathub.org">Flatpak</a>, so I can easily install it with mise and track updates alongside my other tools:</p>
<pre><code class="lang-ini"><span class="hljs-comment"># mise use -g github:pkgforge-dev/ghostty-appimage</span>
<span class="hljs-comment"># or modify ~/.config/mise/config.toml</span>
<span class="hljs-section">[tools."github:pkgforge-dev/ghostty-appimage"]</span>
<span class="hljs-attr">version</span> = <span class="hljs-string">"latest"</span>
<span class="hljs-attr">bin</span> = <span class="hljs-string">"ghostty"</span> <span class="hljs-comment"># to rename the binary and remove the version number</span>
</code></pre>
<p>Then adding the icon manually (<code>~/.local/share/applications/ghostty.desktop</code>):</p>
<pre><code class="lang-ini"><span class="hljs-section">[Desktop Entry]</span>
<span class="hljs-attr">Type</span>=Application
<span class="hljs-attr">Name</span>=Ghostty
<span class="hljs-attr">Exec</span>=/home/akhansari/.local/share/mise/installs/github-pkgforge-dev-ghostty-appimage/latest/ghostty
<span class="hljs-attr">Icon</span>=/home/akhansari/Pictures/icons/ghostty.png
</code></pre>
<h1 id="heading-environments">Environments</h1>
<p>The second pillar of mise helps you manage <a target="_blank" href="https://mise.jdx.dev/environments/">environments</a>.</p>
<p>Every project needs environment variables: database URLs, API keys, feature flags, and more. Managing these across local development and other environments is often messy. Mise solves this problem by making environment variables part of your project configuration.</p>
<p>Add environment variables to the <code>[env]</code> section of your project’s Mise configurations:</p>
<pre><code class="lang-toml"><span class="hljs-comment"># mise.toml (in git)</span>

<span class="hljs-section">[env]</span>
<span class="hljs-attr">APP_ENV</span> = { required = <span class="hljs-literal">true</span> }
<span class="hljs-comment"># Or with helpful message</span>
<span class="hljs-attr">DATABASE_URL</span> = { required = <span class="hljs-string">"Connection string for PostgreSQL. Format: postgres://user:pass@host/db"</span> }

<span class="hljs-comment"># mise.local.toml (gitignore)</span>

<span class="hljs-section">[env]</span>
<span class="hljs-attr">APP_ENV</span> = <span class="hljs-string">"development"</span>
<span class="hljs-attr">DATABASE_URL</span> = <span class="hljs-string">"postgres://root:root@localhost"</span>

<span class="hljs-comment"># Helpers</span>

<span class="hljs-section">[env]</span>
<span class="hljs-comment"># Templates</span>
<span class="hljs-attr">DATABASE_URL</span> = <span class="hljs-string">"postgres://root:root@localhost/{{env.APP_ENV}}"</span>
<span class="hljs-comment"># Protecting sensitive values</span>
<span class="hljs-attr">API_KEY</span> = { value = <span class="hljs-string">"super-secret-key"</span>, redact = <span class="hljs-literal">true</span> }
</code></pre>
<p>When you enter the project directory, these variables are automatically set. When you leave, they are unset. No more forgetting to export variables or polluting your global shell environment.</p>
<h1 id="heading-tasks">Tasks</h1>
<p>The third pillar of mise is the <a target="_blank" href="https://mise.jdx.dev/tasks/">task runner</a>. It’s like Make but better.</p>
<p>Every project has commands you run repeatedly: start the dev server, run tests, build for production, deploy. These commands often live in different places: npm scripts, shell scripts, Makefiles, or just in your memory. Mise brings them all together in one place.</p>
<p>The best part? Tasks automatically have access to your mise environment. Your tools are on PATH, your environment variables are set. No more "command not found" errors because you forgot to activate something.</p>
<h2 id="heading-defining-tasks">Defining Tasks</h2>
<p>Tasks live in the <code>[tasks]</code> section of your <code>mise.toml</code>. Each task has a name and a command to run:</p>
<pre><code class="lang-toml"><span class="hljs-section">[tasks.serve]</span>
<span class="hljs-attr">run</span> = <span class="hljs-string">"pnpm run serve"</span>

<span class="hljs-section">[tasks.test]</span>
<span class="hljs-attr">run</span> = <span class="hljs-string">"pnpm run vitest"</span>

<span class="hljs-section">[tasks.build]</span>
<span class="hljs-attr">run</span> = <span class="hljs-string">"pnpm build"</span>

<span class="hljs-section">[tasks.iac-apply]</span>
<span class="hljs-attr">description</span> = <span class="hljs-string">"Creates or updates infrastructure"</span>
<span class="hljs-attr">dir</span> = <span class="hljs-string">"./iac"</span>
<span class="hljs-attr">run</span> = <span class="hljs-string">"tofu apply"</span>
</code></pre>
<p>Run any task with <code>mise run</code>:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Run a task</span>
mise run serve

<span class="hljs-comment"># Or search and run</span>
mise run

<span class="hljs-comment"># Run a task including mise.staging.toml</span>
mise --env staging run build

<span class="hljs-comment"># Pass additional params</span>
mise run <span class="hljs-built_in">test</span> --<span class="hljs-built_in">help</span>  <span class="hljs-comment"># This will show Vitest's help</span>
</code></pre>
<h2 id="heading-task-composition">Task Composition</h2>
<p>Simple tasks are useful, but the real power comes from combining them. Mise offers two ways to compose tasks.</p>
<p><strong>Sequential execution</strong> runs tasks one after another. If any task fails, the chain stops:</p>
<pre><code class="lang-toml"><span class="hljs-section">[tasks.check-backend]</span>
<span class="hljs-attr">run</span> = <span class="hljs-string">"pnpm --filter backend check"</span>

<span class="hljs-section">[tasks.check-frontend]</span>
<span class="hljs-attr">run</span> = <span class="hljs-string">"pnpm --filter frontend check"</span>

<span class="hljs-section">[tasks.check]</span>
<span class="hljs-attr">run</span> = [
  { task = <span class="hljs-string">"check-backend"</span> },
  { task = <span class="hljs-string">"check-frontend"</span> }
]
</code></pre>
<p><strong>Dependencies</strong> ensure prerequisite tasks run first. Use <code>depends</code> when a task always needs something else:</p>
<pre><code class="lang-toml"><span class="hljs-section">[tasks.podman-push]</span>
<span class="hljs-attr">depends</span> = <span class="hljs-string">"podman-login"</span>
<span class="hljs-attr">run</span> = [
  { task = <span class="hljs-string">"podman-push-backend"</span> },
  { task = <span class="hljs-string">"podman-push-frontend"</span> }
]
</code></pre>
<p>The difference is subtle but important:</p>
<ul>
<li><p><strong>Sequential (</strong><code>run</code>): "Run A, then B, then C"</p>
</li>
<li><p><strong>Dependencies (</strong><code>depends</code>): "Before running this, make sure X is done"</p>
</li>
</ul>
<h1 id="heading-conclusion">Conclusion</h1>
<p>Dev tooling is full of small annoyances: wrong tool versions, missing environment variables, forgotten commands, inconsistent setups between team members. Each problem is small, but together they waste hours every week.</p>
<p>Mise eliminates these problems with three simple ideas:</p>
<ol>
<li><p><strong>Dev Tools</strong>: Define your tool versions in code. Switch automatically when you change directories. And manage your global dev tools.</p>
</li>
<li><p><strong>Environments</strong>: Keep environment variables with your project, not scattered across shell configs and <code>.env</code> files. Validate them before things break.</p>
</li>
<li><p><strong>Tasks</strong>: Put your commands in one place. Compose them into pipelines. Make your CI configuration trivial.</p>
</li>
</ol>
<p>That is what "mise en place" means: everything in its place, ready to go.</p>
]]></content:encoded></item><item><title><![CDATA[Designing with Types: Making illegal states unrepresentable]]></title><description><![CDATA[In this post, we'll explore one of the most powerful principles in type-driven design: using the type system to "Make illegal states unrepresentable". When we encode our business rules directly into our types, the compiler becomes our first line of d...]]></description><link>https://akhansari.tech/designing-with-types-making-illegal-states-unrepresentable</link><guid isPermaLink="true">https://akhansari.tech/designing-with-types-making-illegal-states-unrepresentable</guid><category><![CDATA[#Domain-Driven-Design]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[effect-ts]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Mon, 08 Dec 2025 18:46:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765220730562/98c34ccb-38aa-4159-b406-c1037ec232a2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, we'll explore one of the most powerful principles in type-driven design: using the type system to <em>"Make illegal states unrepresentable"</em>. When we encode our business rules directly into our types, the compiler becomes our first line of defense against bugs.</p>
<p>Let's return to our <code>Contact</code> type from the previous articles. Thanks to our refactoring, it's now well-structured:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> Contact <span class="hljs-keyword">extends</span> Schema.Class&lt;Contact&gt;(<span class="hljs-string">"Contact"</span>)({
    name: PersonalName,
    emailContactInfo: EmailContactInfo,
    postalContactInfo: PostalContactInfo,
}) {}
</code></pre>
<p>Now let's say we have a simple business rule: <em>"A contact must have an email or a postal address."</em> Does our type conform to this rule?</p>
<p>The answer is no. The business rule implies that a contact might have an email address but no postal address, or vice versa. But as it stands, our type requires that a contact must always have <em>both</em> pieces of information.</p>
<h2 id="heading-the-naive-approach">The naive approach</h2>
<p>The answer seems obvious, make the addresses optional:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> Contact <span class="hljs-keyword">extends</span> Schema.Class&lt;Contact&gt;(<span class="hljs-string">"Contact"</span>)({
    name: PersonalName,
    emailContactInfo: Schema.optionalWith(EmailContactInfo, { <span class="hljs-keyword">as</span>: <span class="hljs-string">"Option"</span> }),
    postalContactInfo: Schema.optionalWith(PostalContactInfo, { <span class="hljs-keyword">as</span>: <span class="hljs-string">"Option"</span> }),
}) {}
</code></pre>
<p>But now we've gone too far the other way. In this design, it would be possible for a contact to have <em>neither</em> type of address at all. But the business rule says that at least one piece of information must be present.</p>
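<p>The problem is easy to see in a plain TypeScript sketch (without Effect, with hypothetical string fields for brevity): the optional design happily accepts a contact with neither address.</p>
<pre><code class="lang-typescript">// Naive design: both addresses optional
type NaiveContact = {
    name: string
    email?: string
    post?: string
}

// Type-checks fine, yet violates the business rule:
// this contact has neither an email nor a postal address.
const illegal: NaiveContact = { name: "Ada" }
console.log(JSON.stringify(illegal)) // {"name":"Ada"}
</code></pre>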
<p>What's the solution?</p>
<h2 id="heading-making-illegal-states-unrepresentable">Making illegal states unrepresentable</h2>
<p>If we think about the business rule carefully, we realize there are three possibilities:</p>
<ol>
<li><p>A contact only has an email address</p>
</li>
<li><p>A contact only has a postal address</p>
</li>
<li><p>A contact has both an email address and a postal address</p>
</li>
</ol>
<p>Once we put it like this, the solution becomes obvious. Use a union type with a case for each possibility. In Effect, we can model this elegantly using <code>Schema.Union</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> EmailOnly <span class="hljs-keyword">extends</span> Schema.TaggedClass&lt;EmailOnly&gt;()(<span class="hljs-string">"EmailOnly"</span>, {
    email: EmailContactInfo,
}) {}

<span class="hljs-keyword">class</span> PostOnly <span class="hljs-keyword">extends</span> Schema.TaggedClass&lt;PostOnly&gt;()(<span class="hljs-string">"PostOnly"</span>, {
    post: PostalContactInfo,
}) {}

<span class="hljs-keyword">class</span> EmailAndPost <span class="hljs-keyword">extends</span> Schema.TaggedClass&lt;EmailAndPost&gt;()(<span class="hljs-string">"EmailAndPost"</span>, {
    email: EmailContactInfo,
    post: PostalContactInfo,
}) {}

<span class="hljs-keyword">class</span> Contact <span class="hljs-keyword">extends</span> Schema.Class&lt;Contact&gt;(<span class="hljs-string">"Contact"</span>)({
    name: PersonalName,
    contactInfo: Schema.Union(EmailOnly, PostOnly, EmailAndPost),
}) {}
</code></pre>
<p>This design meets the requirements perfectly. All three cases are explicitly represented, and the fourth possible case (with no email or postal address at all) is not allowed.</p>
<h2 id="heading-how-to-use">How to use</h2>
<p>Now let's see how we might use this in practice:</p>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">createEmailOnlyContact</span>(<span class="hljs-params">model: {
    firstName: <span class="hljs-built_in">string</span>
    lastName: <span class="hljs-built_in">string</span>
    email: <span class="hljs-built_in">string</span>
}</span>): <span class="hljs-title">Effect</span>.<span class="hljs-title">Effect</span>&lt;<span class="hljs-title">Contact</span>, <span class="hljs-title">Brand</span>.<span class="hljs-title">Brand</span>.<span class="hljs-title">BrandErrors</span>, <span class="hljs-title">never</span>&gt; </span>{
    <span class="hljs-keyword">return</span> Effect.gen(<span class="hljs-function"><span class="hljs-keyword">function</span>* (<span class="hljs-params"></span>) </span>{
        <span class="hljs-keyword">const</span> emailAddress = <span class="hljs-keyword">yield</span>* EmailAddress.either(model.email)
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Contact({
            name: <span class="hljs-keyword">new</span> PersonalName({
                firstName: model.firstName,
                middleInitial: Option.none(),
                lastName: model.lastName,
            }),
            contactInfo: <span class="hljs-keyword">new</span> EmailOnly({
                email: <span class="hljs-keyword">new</span> EmailContactInfo({
                    emailAddress,
                    isEmailVerified: <span class="hljs-literal">false</span>,
                }),
            }),
        })
    })
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getEmailAddress</span>(<span class="hljs-params">contact: Contact</span>): <span class="hljs-title">Option</span>.<span class="hljs-title">Option</span>&lt;<span class="hljs-title">EmailAddress</span>&gt; </span>{
    <span class="hljs-keyword">return</span> Match.value(contact.contactInfo).pipe(
        Match.tag(<span class="hljs-string">"EmailOnly"</span>, <span class="hljs-string">"EmailAndPost"</span>, <span class="hljs-function">(<span class="hljs-params">{ email }</span>) =&gt;</span>
            Option.some(email.emailAddress)
        ),
        Match.tag(<span class="hljs-string">"PostOnly"</span>, <span class="hljs-function">() =&gt;</span> Option.none()),
        Match.exhaustive
    )
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updatePostalAddress</span>(<span class="hljs-params">
    contact: Contact,
    newPostalAddress: PostalContactInfo
</span>): <span class="hljs-title">Contact</span> </span>{
    <span class="hljs-keyword">const</span> newContactInfo = Match.value(contact.contactInfo).pipe(
        Match.tag(<span class="hljs-string">"EmailOnly"</span>, <span class="hljs-string">"EmailAndPost"</span>,
            <span class="hljs-function">(<span class="hljs-params">{ email }</span>) =&gt;</span> <span class="hljs-keyword">new</span> EmailAndPost({ email, post: newPostalAddress })
        ),
        Match.tag(<span class="hljs-string">"PostOnly"</span>,
            <span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">new</span> PostOnly({ post: newPostalAddress })
        ),
        Match.exhaustive
    )
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Contact({
        name: contact.name,
        contactInfo: newContactInfo,
    })
}
</code></pre>
<p>The <code>Match.exhaustive</code> at the end ensures that if we ever add a new case to our <code>ContactInfo</code> union, the compiler will immediately tell us about every place we need to handle it.</p>
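<p>For comparison, here is the same exhaustiveness guarantee in plain TypeScript (without Effect), a sketch using simplified string fields and a <code>never</code> check in the <code>default</code> branch:</p>
<pre><code class="lang-typescript">type ContactInfo =
    | { _tag: "EmailOnly"; email: string }
    | { _tag: "PostOnly"; post: string }
    | { _tag: "EmailAndPost"; email: string; post: string }

function describeContact(info: ContactInfo): string {
    switch (info._tag) {
        case "EmailOnly": return `email: ${info.email}`
        case "PostOnly": return `post: ${info.post}`
        case "EmailAndPost": return `email: ${info.email}, post: ${info.post}`
        default: {
            // If a new case (say "PhoneOnly") is added to ContactInfo,
            // this assignment stops compiling until the switch handles it.
            const unreachable: never = info
            return unreachable
        }
    }
}

console.log(describeContact({ _tag: "EmailOnly", email: "ada@example.com" }))
// email: ada@example.com
</code></pre>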
<h2 id="heading-why-bother-with-these-complicated-structure">Why bother with this complicated structure?</h2>
<p>At this point, you might be saying that we've made things unnecessarily complicated. I would respond with these points:</p>
<p>First, <strong>the business logic is complicated</strong>. There is no easy way to avoid it. If your code is not this complicated, you're not handling all the cases properly.</p>
<p>Second, <strong>if the logic is represented by types, it is automatically self-documenting</strong>. You can look at the union cases below and immediately see what the business rule is. You don't have to spend any time trying to analyze any other code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> Contact <span class="hljs-keyword">extends</span> Schema.Class&lt;Contact&gt;(<span class="hljs-string">"Contact"</span>)({
    name: PersonalName,
    contactInfo: Schema.Union(EmailOnly, PostOnly, EmailAndPost),
}) {}
</code></pre>
<p>Just from reading this type definition, you know: a contact can have email only, postal address only, or both, but never neither.</p>
<p>Third, <strong>if the logic is represented by a type, any changes to the business rules will immediately create breaking changes</strong>, which is generally a good thing. If you add a fourth case, say <code>PhoneOnly</code>, the compiler will point out every place in your code-base where you need to handle it.</p>
<p>Fourth, as mentioned in previous posts, Effect adds additional value to TypeScript types by default, including structural equality, immutability, validation and serialization, exhaustive pattern matching, composition, and pipelines.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The key insight is this: when you find yourself using optional fields with implicit dependencies between them, consider whether a union type would better represent your domain.</p>
<p>Instead of writing code that checks <code>if (contact.email &amp;&amp; !contact.postalAddress)</code>, you write code that handles each explicit case. The compiler ensures you handle them all, and the types document what states are actually possible.</p>
<p>This is what "Making illegal states unrepresentable" means in practice. It's not about writing more code, it's about writing code where the <em>wrong</em> code simply won't compile.</p>
<p>In the next article, we'll explore how applying this principle can lead to discovering new domain concepts that weren't obvious at first.</p>
<hr />
<p><em>This article was inspired by Scott Wlaschin's excellent "Designing with types" series on</em> <a target="_blank" href="https://fsharpforfunandprofit.com/posts/designing-with-types-making-illegal-states-unrepresentable/"><em>F# for Fun and Profit</em></a><em>, adapted for the TypeScript and Effect ecosystem.</em></p>
]]></content:encoded></item><item><title><![CDATA[The Right Way to Code with GenAI]]></title><description><![CDATA[The rise of GenAI tools for coding has changed how many developers work. These tools can write code, explain concepts, and suggest solutions in seconds. But with this power comes an important question: how should we actually use these tools? The answ...]]></description><link>https://akhansari.tech/the-right-way-to-code-with-genai</link><guid isPermaLink="true">https://akhansari.tech/the-right-way-to-code-with-genai</guid><category><![CDATA[llm]]></category><category><![CDATA[AI]]></category><category><![CDATA[coding]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Fri, 05 Dec 2025 13:06:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/R4WCbazrD1g/upload/87dd6a218f5dfa583ba058322cbb42d2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The rise of GenAI tools for coding has changed how many developers work. These tools can write code, explain concepts, and suggest solutions in seconds. But with this power comes an important question: how should we actually use these tools? The answer matters more than most people realize.</p>
<blockquote>
<p>… study shows more than 800 popular GitHub projects with code quality degrading after adopting AI tools. … there’s a real risk that newer models will reinforce and amplify those trends, producing even worse code over time. - <a target="_blank" href="https://blog.robbowley.net/2025/12/04/ai-is-still-making-code-worse-a-new-cmu-study-confirms/">Rob Bowley</a></p>
</blockquote>
<h2 id="heading-you-are-the-developer-not-the-genai">You are the developer, not the GenAI</h2>
<p>The most important principle is simple: GenAI is your <em>assistant</em>, not the one in control, dictating what you should do. This might sound obvious, but it's easy to fall into the opposite pattern. When you start accepting AI suggestions without understanding them, or when you shape your work around what the AI can produce, the roles have reversed. You've become the assistant to your own tool, trapped in an illusion of productivity.</p>
<p>A good relationship with GenAI looks like this: you decide what to build, you set the standards, and you validate the output. The AI handles the mechanical parts. Think of it like having a junior developer who types very fast but needs supervision.</p>
<h2 id="heading-let-genai-handle-the-repetitive-work">Let GenAI handle the repetitive work</h2>
<p>GenAI shines when the task is repetitive and well-defined. Writing boilerplate code, generating new code for known patterns, refactoring, converting data formats, or creating a handler for your service: these are perfect use cases. The pattern already exists, and you just need more of it.</p>
<p>Here's something many developers overlook: LLMs are trained on vast amounts of code from the internet, and most of that code is average at best. The AI learned from millions of mediocre examples. GenAI is better at reproducing patterns than creating excellent new ones.</p>
<p>The practical consequence? Write good code yourself first. Establish your patterns, your conventions, your architecture. Then let GenAI replicate what you've created. When it has your high-quality examples as context, it produces much better results than when it works from its general training alone. Newer GenAI tools pull your existing code into the context before generating anything new.</p>
<h2 id="heading-grow-your-knowledge-dont-outsource-it">Grow your knowledge, don't outsource It</h2>
<p>It's tempting to let GenAI handle everything you don't understand. Don't! Every time you accept code you can't explain, you create technical debt in your own mind. You become dependent on a tool to maintain your own project.</p>
<p>Instead, use GenAI as a learning accelerator. Ask it to explain or look up concepts. Have it break down complex algorithms step by step. Use it to explore different approaches to the same problem. The goal is to understand more after each interaction, not less.</p>
<p>A developer who uses GenAI well ends up knowing more than before. A developer who uses it poorly ends up knowing less while producing more code, a dangerous combination.</p>
<h2 id="heading-write-new-things-yourself-first">Write new things yourself first</h2>
<p>When you're building something genuinely new, like a novel algorithm, a unique business logic, your software architecture, or a creative solution, write it yourself first. Your initial version might be rough, but it will be authentically yours. It will reflect the context and your actual understanding of the problem.</p>
<p>Once you have working code, then bring in GenAI. Ask it to review your solution. Have it suggest optimizations. Let it simplify complex sections. This approach gives you two benefits: you maintain a deep understanding of your code, and you get the AI's help making it better while learning from it.</p>
<p>The reverse approach, asking GenAI to write novel code from scratch, often produces something that looks right but misses important details. The AI doesn't understand your specific context the way you do.</p>
<h2 id="heading-a-surprisingly-good-rubber-duck">A surprisingly good Rubber Duck</h2>
<p>Perhaps the most underrated use of GenAI is as a thinking partner. Traditional "rubber duck debugging" involves explaining your problem to an inanimate object to clarify your thoughts. GenAI takes this further because it actually responds.</p>
<p>Use GenAI to organize your thoughts before writing code. Describe the domain you're modeling and ask it to identify potential challenges. Explain a problem you're stuck on and let it ask clarifying questions. Discuss different architectural options and their trade-offs.</p>
<p>This kind of dialogue often reveals gaps in your thinking without any code being written at all. The AI serves as a mirror that helps you see your own ideas more clearly.</p>
<h2 id="heading-context-is-everything">Context is everything</h2>
<p>Here's a truth that separates effective GenAI users from frustrated ones: the quality of output depends almost entirely on the quality of context you provide. An LLM without context is like a skilled contractor who shows up with no blueprints. They might build something, but probably not what you need.</p>
<p>Good context means giving the AI everything it needs to understand your world. There are several ways to do this:</p>
<ul>
<li><p><strong>Plan mode before edit mode.</strong> Tools like <a target="_blank" href="https://opencode.ai/">Open Code</a> or Claude Code offer a plan mode. Use it to discuss and validate the approach before the AI touches any files. This back-and-forth clarifies context and catches misunderstandings early.<br />  For complex tasks, start with high-level domain context, then narrow to the specific module, then to the exact problem.</p>
</li>
<li><p><strong>Dedicated context files.</strong> Tools now support files like <a target="_blank" href="http://AGENTS.md"><code>AGENTS.md</code></a> in your repository. Define your architecture, conventions, and constraints once, and every AI interaction benefits.</p>
</li>
<li><p><strong>System prompts.</strong> When available, use them if necessary to establish the role, standards, and boundaries the AI should follow.</p>
</li>
</ul>
<p>The investment in good context pays off quickly. You spend less time correcting mistakes and more time on work that matters. Strategic Domain-Driven Design shines even more in the GenAI era.</p>
<h2 id="heading-the-bottom-line">The bottom line</h2>
<p>GenAI is a powerful tool, but tools don't make decisions, developers do. Use it for repetition, not creation. Use it to learn, not to avoid learning. Let it help you think, not think for you.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">👉</div>
<div data-node-type="callout-text">The developers who will thrive are those who treat GenAI as amplification of their skills rather than a replacement for them. Stay in the driver's seat.</div>
</div>]]></content:encoded></item><item><title><![CDATA[Designing with Types: Wrapper types]]></title><description><![CDATA[At some point in your TypeScript journey, you've probably written code like this:
email: string
zipCode: string
stateCode: string

These fields are all defined as simple strings. But are they really just strings? Can you accidentally swap an email ad...]]></description><link>https://akhansari.tech/designing-with-types-wrapper-types</link><guid isPermaLink="true">https://akhansari.tech/designing-with-types-wrapper-types</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[effect-ts]]></category><category><![CDATA[#Domain-Driven-Design]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Tue, 18 Nov 2025 18:51:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/8idFF2R_I6g/upload/6b383e9ce924c32f5ede54c83ce70bad.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At some point in your TypeScript journey, you've probably written code like this:</p>
<pre><code class="lang-typescript">email: <span class="hljs-built_in">string</span>
zipCode: <span class="hljs-built_in">string</span>
stateCode: <span class="hljs-built_in">string</span>
</code></pre>
<p>These fields are all defined as simple strings. But are they really just strings? Can you accidentally swap an email address with a zip code? In a type-safe world, this should be impossible.</p>
<p>In domain-driven design, an email address and a zip code are distinct concepts, not interchangeable strings. We want separate types so they can't be mixed up by mistake.</p>
<p>This has been known as good practice for years, but in many languages creating hundreds of tiny wrapper types feels painful. This leads to "primitive obsession": the code smell where developers use primitive types everywhere instead of creating meaningful domain types.</p>
<p>With TypeScript and Effect, we have no excuse! It's straightforward to create these wrapper types, and Effect gives us powerful tools for validation and error handling.</p>
<h2 id="heading-creating-wrapper-types-with-branded-types">Creating Wrapper Types with Branded Types</h2>
<p>The simplest way to create a distinct type is to use TypeScript's branded types pattern. Effect provides built-in support for this through <a target="_blank" href="https://effect.website/docs/code-style/branded-types/">Brand</a>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Brand } <span class="hljs-keyword">from</span> <span class="hljs-string">"effect"</span>

<span class="hljs-keyword">type</span> EmailAddress = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"EmailAddress"</span>&gt;
<span class="hljs-keyword">type</span> ZipCode = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"ZipCode"</span>&gt;
<span class="hljs-keyword">type</span> StateCode = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"StateCode"</span>&gt;
</code></pre>
<p>These are still strings at runtime, but TypeScript treats them as incompatible types at compile time. You can't accidentally pass a <code>ZipCode</code> where an <code>EmailAddress</code> is expected.</p>
<h3 id="heading-simple-cases-with-brandnominal">Simple Cases with Brand.nominal</h3>
<p>If you don't need validation and just want to distinguish types at compile-time, use <code>Brand.nominal</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> UserId = Brand.nominal&lt;UserId&gt;()
<span class="hljs-keyword">const</span> ProductId = Brand.nominal&lt;ProductId&gt;()

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getUser</span>(<span class="hljs-params">id: UserId</span>) </span>{
    <span class="hljs-comment">/* ... */</span>
}
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getProduct</span>(<span class="hljs-params">id: ProductId</span>) </span>{
    <span class="hljs-comment">/* ... */</span>
}

<span class="hljs-keyword">const</span> userId = UserId(<span class="hljs-number">42</span>)
<span class="hljs-keyword">const</span> productId = ProductId(<span class="hljs-number">42</span>)

getUser(productId) <span class="hljs-comment">// ❌ Type error</span>
getUser(userId) <span class="hljs-comment">// ✅ OK</span>
</code></pre>
<p>The <code>nominal</code> constructor doesn't do any runtime checks. It just adds the type brand. Use this when you want type safety without validation overhead.</p>
<p>It's also possible to use brands as part of a Schema:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> User <span class="hljs-keyword">extends</span> Schema.Class&lt;User&gt;(<span class="hljs-string">"User"</span>)({
    userId: Schema.String.pipe(Schema.brand(<span class="hljs-string">"UserId"</span>)),
    name: Schema.String,
}) {}
</code></pre>
<h3 id="heading-adding-validation-with-brandrefined">Adding Validation with Brand.refined</h3>
<p>For types that need validation (smart constructors), Effect's <code>Brand.refined</code> function lets you create branded types with built-in validation.</p>
<p>The <code>refined</code> function takes two parameters:</p>
<ol>
<li><p>A predicate function that returns true if the value is valid</p>
</li>
<li><p>An error function that creates a <code>BrandErrors</code> when validation fails</p>
</li>
</ol>
<p>Let's create refined constructors for our domain types:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">type</span> EmailAddress = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"EmailAddress"</span>&gt;

<span class="hljs-keyword">const</span> EmailAddress = Brand.refined&lt;EmailAddress&gt;(
    <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> /^\S+@\S+\.\S+$/.test(value),
    <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> Brand.error(<span class="hljs-string">`"<span class="hljs-subst">${value}</span>" is not a valid email address`</span>)
)
</code></pre>
<p>Notice how validation is defined through predicate functions. Once you have an <code>EmailAddress</code> instance, you know it's valid. The type system guarantees it.</p>
<h3 id="heading-validation-constructors">Validation Constructors</h3>
<p>The most basic usage is to call the constructor directly. This will throw an exception if validation fails.</p>
<pre><code class="lang-typescript">EmailAddress(<span class="hljs-string">"john@example.com"</span>)
</code></pre>
<p>But it is better to handle it gracefully. Effect provides several approaches, each with different trade-offs.</p>
<p>Sometimes you don't care about the specific error, just whether it succeeded; for that, use the <code>option</code> constructor.<br />The advantage here is simplicity. The disadvantage is losing error details: you don't know <em>why</em> validation failed.<br />It's usually best for numeric types or simple cases.</p>
<pre><code class="lang-typescript">EmailAddress.option(<span class="hljs-string">"john@example.com"</span>)
</code></pre>
<p>For detailed error information without exceptions, use <code>either</code>.<br />This gives you both success and failure information in a type-safe way.</p>
<pre><code class="lang-typescript">EmailAddress.either(<span class="hljs-string">"john@example.com"</span>)
</code></pre>
<h2 id="heading-encapsulation-and-type-safety">Encapsulation and Type Safety</h2>
<p>The beauty of branded types is that TypeScript prevents you from accidentally creating them without validation:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ This won't compile</span>
<span class="hljs-keyword">const</span> email: EmailAddress = <span class="hljs-string">"not-validated@example.com"</span>

<span class="hljs-comment">// ✅ Must go through the constructor</span>
<span class="hljs-keyword">const</span> email = EmailAddress(<span class="hljs-string">"validated@example.com"</span>)
</code></pre>
<p>This ensures that invalid data can never enter your domain, even by accident.</p>
<h3 id="heading-when-to-wrap-and-unwrap">When to Wrap and Unwrap</h3>
<p>You should create these wrapped types at service boundaries:</p>
<ul>
<li><p><strong>When wrapping:</strong> At the UI layer (form submissions), when loading from databases, or when receiving data from external APIs</p>
<pre><code class="lang-typescript">  <span class="hljs-comment">// external source</span>
  <span class="hljs-keyword">const</span> email = <span class="hljs-keyword">yield</span>* EmailAddress.either(formInput.email)
  <span class="hljs-comment">// trusted source</span>
  <span class="hljs-keyword">const</span> email = EmailAddress(formInput.email)
</code></pre>
</li>
<li><p><strong>When unwrapping:</strong> When persisting to databases, binding to UI elements, or sending data to external services</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">const</span> emailDto: <span class="hljs-built_in">string</span> = email <span class="hljs-comment">// of type EmailAddress</span>
</code></pre>
</li>
</ul>
<p>The key insight is that once data enters your domain as a wrapped type, it stays wrapped. You rarely need to "unwrap" it within your business logic. You can use it directly since it's still a string/number/etc. at runtime.</p>
<h2 id="heading-a-complete-example-with-multiple-types">A Complete Example with Multiple Types</h2>
<p>Let's refactor our contact system using these patterns:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Brand, Effect, Schema } <span class="hljs-keyword">from</span> <span class="hljs-string">"effect"</span>

<span class="hljs-keyword">type</span> EmailAddress = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"EmailAddress"</span>&gt;
<span class="hljs-keyword">type</span> ZipCode = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"ZipCode"</span>&gt;
<span class="hljs-keyword">type</span> StateCode = <span class="hljs-built_in">string</span> &amp; Brand.Brand&lt;<span class="hljs-string">"StateCode"</span>&gt;

<span class="hljs-keyword">const</span> EmailAddress = Brand.refined&lt;EmailAddress&gt;(
    <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> /^\S+@\S+\.\S+$/.test(value),
    <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> Brand.error(<span class="hljs-string">`"<span class="hljs-subst">${value}</span>" is not a valid email address`</span>)
)

<span class="hljs-keyword">const</span> ZipCode = Brand.refined&lt;ZipCode&gt;(
    <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> /^\d{<span class="hljs-number">5</span>}$/.test(value),
    <span class="hljs-function">(<span class="hljs-params">_</span>) =&gt;</span> Brand.error(<span class="hljs-string">`Zip code must be 5 digits`</span>)
)

<span class="hljs-keyword">const</span> StateCode = Brand.refined&lt;StateCode&gt;(
    <span class="hljs-function">(<span class="hljs-params">value</span>) =&gt;</span> {
        <span class="hljs-keyword">const</span> normalized = value.toUpperCase()
        <span class="hljs-keyword">return</span> [<span class="hljs-string">"AZ"</span>, <span class="hljs-string">"CA"</span>, <span class="hljs-string">"NY"</span>, <span class="hljs-string">"TX"</span>, <span class="hljs-string">"FL"</span>].includes(normalized)
    },
    <span class="hljs-function">(<span class="hljs-params">_</span>) =&gt;</span> Brand.error(<span class="hljs-string">`State code is not in list`</span>)
)

<span class="hljs-keyword">class</span> PostalAddress <span class="hljs-keyword">extends</span> Schema.Class&lt;PostalAddress&gt;(<span class="hljs-string">"PostalAddress"</span>)({
    address1: Schema.String,
    address2: Schema.String,
    city: Schema.String,
    state: Schema.String.pipe(Schema.fromBrand(StateCode)),
    zip: Schema.String.pipe(Schema.fromBrand(ZipCode)),
}) {}

<span class="hljs-keyword">class</span> PostalContactInfo <span class="hljs-keyword">extends</span> Schema.Class&lt;PostalContactInfo&gt;(<span class="hljs-string">"PostalContactInfo"</span>)({
    address: PostalAddress,
    isAddressValid: Schema.Boolean,
}) {}

<span class="hljs-keyword">class</span> PersonalName <span class="hljs-keyword">extends</span> Schema.Class&lt;PersonalName&gt;(<span class="hljs-string">"PersonalName"</span>)({
    firstName: Schema.String,
    middleInitial: Schema.optionalWith(Schema.String, { <span class="hljs-keyword">as</span>: <span class="hljs-string">"Option"</span> }),
    lastName: Schema.String,
}) {}

<span class="hljs-keyword">class</span> EmailContactInfo <span class="hljs-keyword">extends</span> Schema.Class&lt;EmailContactInfo&gt;(<span class="hljs-string">"EmailContactInfo"</span>)({
    emailAddress: Schema.String.pipe(Schema.fromBrand(EmailAddress)),
    isEmailVerified: Schema.Boolean,
}) {}

<span class="hljs-keyword">class</span> Contact <span class="hljs-keyword">extends</span> Schema.Class&lt;Contact&gt;(<span class="hljs-string">"Contact"</span>)({
    name: PersonalName,
    emailContactInfo: EmailContactInfo,
    postalContactInfo: PostalContactInfo,
}) {}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">createPostalAddress</span>(<span class="hljs-params">model: {
    address1: <span class="hljs-built_in">string</span>
    address2: <span class="hljs-built_in">string</span>
    city: <span class="hljs-built_in">string</span>
    state: <span class="hljs-built_in">string</span>
    zip: <span class="hljs-built_in">string</span>
}</span>): <span class="hljs-title">Effect</span>.<span class="hljs-title">Effect</span>&lt;<span class="hljs-title">PostalAddress</span>, <span class="hljs-title">Brand</span>.<span class="hljs-title">Brand</span>.<span class="hljs-title">BrandErrors</span>, <span class="hljs-title">never</span>&gt; </span>{
    <span class="hljs-keyword">return</span> Effect.gen(<span class="hljs-function"><span class="hljs-keyword">function</span>* (<span class="hljs-params"></span>) </span>{
        <span class="hljs-keyword">const</span> state = <span class="hljs-keyword">yield</span>* StateCode.either(model.state)
        <span class="hljs-keyword">const</span> zip = <span class="hljs-keyword">yield</span>* ZipCode.either(model.zip)
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> PostalAddress({
            address1: model.address1,
            address2: model.address2,
            city: model.city,
            state,
            zip,
        })
    })
}
</code></pre>
<h2 id="heading-guidelines-summary">Guidelines Summary</h2>
<p>To wrap up, here are the key principles:</p>
<ul>
<li><p><strong>Use branded types to represent your domain accurately.</strong> Don't settle for primitive obsession.</p>
</li>
<li><p><strong>Validate at construction time with</strong> <code>Brand.refined</code>. Once created, branded types should always be valid.</p>
</li>
<li><p><strong>Use</strong> <code>Brand.nominal</code> for simple distinction without validation. When you just need compile-time type safety.</p>
</li>
<li><p><strong>Be explicit about validation failures.</strong> Use Effect's error handling to force callers to handle invalid cases.</p>
</li>
<li><p><strong>Leverage Effect's powerful abstractions.</strong> Brand, Schema, Either, and Option give you tools that make this pattern practical.</p>
</li>
<li><p><strong>Wrap at boundaries, keep wrapped throughout your domain.</strong> Validate once at the edge, trust the types internally.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Creating wrapper types with branded types might seem like extra work upfront, but it pays dividends:</p>
<ul>
<li><p><strong>Compile-time safety:</strong> Catch bugs before runtime</p>
</li>
<li><p><strong>Self-documenting code:</strong> Types express domain concepts clearly</p>
</li>
<li><p><strong>Validation guarantees:</strong> Invalid data can't enter your domain</p>
</li>
<li><p><strong>Better refactoring:</strong> The compiler helps you when types change</p>
</li>
<li><p><strong>Zero runtime overhead:</strong> Brands are erased at runtime</p>
</li>
</ul>
<p>There's no excuse for primitive obsession anymore!</p>
<hr />
<p><em>This article was inspired by Scott Wlaschin's excellent "Designing with types" series on</em> <a target="_blank" href="https://fsharpforfunandprofit.com/posts/designing-with-types-single-case-dus/"><em>F# for Fun and Profit</em></a><em>, adapted for the TypeScript and Effect ecosystem.</em></p>
]]></content:encoded></item><item><title><![CDATA[Designing with Types: Introduction]]></title><description><![CDATA[When we write code, we often think about types as just a way to avoid errors or make our IDE or TypeScript happy. But types can do much more than that. They can help us think about our problems, express our business rules, model our domain, and make ...]]></description><link>https://akhansari.tech/designing-with-types-introduction</link><guid isPermaLink="true">https://akhansari.tech/designing-with-types-introduction</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[effect-ts]]></category><category><![CDATA[#Domain-Driven-Design]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Thu, 13 Nov 2025 15:56:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/JTJrat7OaLQ/upload/47bd98c3b2d2b79d0de0e041192456de.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we write code, we often think about types as just a way to avoid errors or make our IDE or TypeScript happy. But types can do much more than that. They can help us think about our problems, express our business rules, model our domain, and make bad code impossible to write.</p>
<p>In this article, we'll explore how to use types as part of the design process in TypeScript, enhanced by the <a target="_blank" href="https://effect.website/">Effect</a> library. The careful use of types can make your design more transparent and improve correctness at the same time.</p>
<p>We'll focus on the "micro level" of design, working at the lowest level of individual types and functions. While many of these concepts are possible in plain TypeScript, Effect's functional primitives make this kind of refactoring more natural and powerful. We'll let the type system and Effect guide us toward better solutions.</p>
<p>Sometimes, the best code is the code that simply cannot compile if it's wrong.</p>
<h2 id="heading-the-starting-point">The Starting Point</h2>
<p>Let's work with a common example: a <code>Contact</code> type. Here's what a typical implementation might look like:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">type</span> Contact = {
    firstName: <span class="hljs-built_in">string</span>
    middleInitial: <span class="hljs-built_in">string</span>
    lastName: <span class="hljs-built_in">string</span>
    emailAddress: <span class="hljs-built_in">string</span>
    <span class="hljs-comment">// true if ownership of email address is confirmed</span>
    isEmailVerified: <span class="hljs-built_in">boolean</span>
    address1: <span class="hljs-built_in">string</span>
    address2: <span class="hljs-built_in">string</span>
    city: <span class="hljs-built_in">string</span>
    state: <span class="hljs-built_in">string</span>
    zip: <span class="hljs-built_in">string</span>
    <span class="hljs-comment">// true if validated against address service</span>
    isAddressValid: <span class="hljs-built_in">boolean</span>
}
</code></pre>
<p>This looks straightforward, we've all seen something like this countless times. But how can we refactor this to make better use of the type system?</p>
<h2 id="heading-understanding-data-relationships">Understanding Data Relationships</h2>
<p>The first step is to analyze how the data is accessed and updated. For instance, would you ever update <code>zip</code> without also updating <code>address1</code>? Probably not. On the other hand, you might frequently update <code>emailAddress</code> without touching <code>firstName</code>.</p>
<p>This leads to our first principle:</p>
<p><strong>Use objects to group together data that must be consistent (atomic), but don't needlessly group unrelated data.</strong> In general, low coupling and high cohesion apply across all levels, from individual types and functions to the overall architecture.</p>
<p>In our Contact example, we can identify several natural groupings:</p>
<ul>
<li><p>The three name values form a cohesive set</p>
</li>
<li><p>The address values belong together</p>
</li>
<li><p>The email information is its own distinct set</p>
</li>
</ul>
<p>We also have validation flags like <code>isAddressValid</code> and <code>isEmailVerified</code>. Should these be part of their related sets? Yes, because they're dependent on those values. If the <code>emailAddress</code> changes, <code>isEmailVerified</code> should probably reset to false at the same time.</p>
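<p>A hypothetical update helper can enforce that consistency by always resetting the flag when the email changes. This is a plain-TypeScript sketch; the article itself models these groups with Schema classes:</p>

```typescript
type EmailContactInfo = {
    readonly emailAddress: string
    readonly isEmailVerified: boolean
}

// Changing the email always resets the verification flag, so the two
// dependent values can never drift apart.
const updateEmail = (info: EmailContactInfo, newEmail: string): EmailContactInfo => ({
    ...info,
    emailAddress: newEmail,
    isEmailVerified: false,
})
```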
<h2 id="heading-refactoring-with-better-structure">Refactoring with Better Structure</h2>
<p>Let's break down our monolithic Contact type. For the postal address, we can create two types: a generic <code>PostalAddress</code> and a context-specific <code>PostalContactInfo</code> that includes validation state.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> PostalAddress <span class="hljs-keyword">extends</span> Schema.Class&lt;PostalAddress&gt;(<span class="hljs-string">"PostalAddress"</span>)({
    address1: Schema.String,
    address2: Schema.String,
    city: Schema.String,
    state: Schema.String,
    zip: Schema.String,
}) {}

<span class="hljs-keyword">class</span> PostalContactInfo <span class="hljs-keyword">extends</span> Schema.Class&lt;PostalContactInfo&gt;(<span class="hljs-string">"PostalContactInfo"</span>)({
    address: PostalAddress,
    isAddressValid: Schema.Boolean,
}) {}
</code></pre>
<p>Using Effect's <a target="_blank" href="https://effect.website/docs/schema/classes/">Schema.Class</a> provides structural equality and immutability out of the box, two important properties for domain modeling.</p>
<h2 id="heading-expressing-optionality-with-effects-option">Expressing Optionality with Effect's Option</h2>
<p>In the original design, <code>middleInitial</code> is a string, but not everyone has a middle initial. Using an empty string to represent "no value" is a common pattern, but it's implicit and error-prone. Effect provides the <code>Option</code> type to explicitly signal optionality:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> PersonalName <span class="hljs-keyword">extends</span> Schema.Class&lt;PersonalName&gt;(<span class="hljs-string">"PersonalName"</span>)({
    firstName: Schema.String,
    middleInitial: Schema.optionalWith(Schema.String, { <span class="hljs-keyword">as</span>: <span class="hljs-string">"Option"</span> }),
    lastName: Schema.String,
}) {}
</code></pre>
<p>With <code>optionalWith</code>, we make it impossible to forget that a value might be absent. The type system forces us to handle both cases.</p>
<h2 id="heading-the-complete-refactored-design">The Complete Refactored Design</h2>
<p>Here's our fully refactored Contact type:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Option, Schema } <span class="hljs-keyword">from</span> <span class="hljs-string">"effect"</span>

<span class="hljs-keyword">class</span> PostalAddress <span class="hljs-keyword">extends</span> Schema.Class&lt;PostalAddress&gt;(<span class="hljs-string">"PostalAddress"</span>)({
    address1: Schema.String,
    address2: Schema.String,
    city: Schema.String,
    state: Schema.String,
    zip: Schema.String,
}) {}

<span class="hljs-keyword">class</span> PostalContactInfo <span class="hljs-keyword">extends</span> Schema.Class&lt;PostalContactInfo&gt;(<span class="hljs-string">"PostalContactInfo"</span>)({
    address: PostalAddress,
    isAddressValid: Schema.Boolean,
}) {}

<span class="hljs-keyword">class</span> PersonalName <span class="hljs-keyword">extends</span> Schema.Class&lt;PersonalName&gt;(<span class="hljs-string">"PersonalName"</span>)({
    firstName: Schema.String,
    middleInitial: Schema.optionalWith(Schema.String, { <span class="hljs-keyword">as</span>: <span class="hljs-string">"Option"</span> }),
    lastName: Schema.String,
}) {}

<span class="hljs-keyword">class</span> EmailContactInfo <span class="hljs-keyword">extends</span> Schema.Class&lt;EmailContactInfo&gt;(<span class="hljs-string">"EmailContactInfo"</span>)({
    emailAddress: Schema.String,
    isEmailVerified: Schema.Boolean,
}) {}

<span class="hljs-keyword">class</span> Contact <span class="hljs-keyword">extends</span> Schema.Class&lt;Contact&gt;(<span class="hljs-string">"Contact"</span>)({
    name: PersonalName,
    emailContactInfo: EmailContactInfo,
    postalContactInfo: PostalContactInfo,
}) {}
</code></pre>
<h2 id="heading-creating-instances">Creating Instances</h2>
<p>With Effect's Schema classes, creating instances is clean and benefits from structural equality:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> contact = <span class="hljs-keyword">new</span> Contact({
    name: <span class="hljs-keyword">new</span> PersonalName({
        firstName: <span class="hljs-string">"John"</span>,
        middleInitial: Option.some(<span class="hljs-string">"Q"</span>),
        lastName: <span class="hljs-string">"Doe"</span>,
    }),
    emailContactInfo: <span class="hljs-keyword">new</span> EmailContactInfo({
        emailAddress: <span class="hljs-string">"john@example.com"</span>,
        isEmailVerified: <span class="hljs-literal">false</span>,
    }),
    postalContactInfo: <span class="hljs-keyword">new</span> PostalContactInfo({
        address: <span class="hljs-keyword">new</span> PostalAddress({
            address1: <span class="hljs-string">"123 Main St"</span>,
            address2: <span class="hljs-string">"Apt 4B"</span>,
            city: <span class="hljs-string">"Springfield"</span>,
            state: <span class="hljs-string">"IL"</span>,
            zip: <span class="hljs-string">"62701"</span>,
        }),
        isAddressValid: <span class="hljs-literal">false</span>,
    }),
})
</code></pre>
<h2 id="heading-working-with-optional-values">Working with Optional Values</h2>
<p>Effect provides a rich API for working with <code>Option</code> types:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { pipe } <span class="hljs-keyword">from</span> <span class="hljs-string">"effect"</span>

<span class="hljs-keyword">const</span> displayName = pipe(
    contact.name.middleInitial,
    Option.match({
        onNone: <span class="hljs-function">() =&gt;</span> <span class="hljs-string">`<span class="hljs-subst">${contact.name.firstName}</span> <span class="hljs-subst">${contact.name.lastName}</span>`</span>,
        onSome: <span class="hljs-function">(<span class="hljs-params">initial</span>) =&gt;</span> <span class="hljs-string">`<span class="hljs-subst">${contact.name.firstName}</span> <span class="hljs-subst">${initial}</span>. <span class="hljs-subst">${contact.name.lastName}</span>`</span>,
    })
)

<span class="hljs-keyword">const</span> initial = pipe(
    contact.name.middleInitial,
    Option.getOrElse(<span class="hljs-function">() =&gt;</span> <span class="hljs-string">""</span>)
)
</code></pre>
<h2 id="heading-the-benefits">The Benefits</h2>
<p>We haven't written a single business logic function yet, but our code already better represents the domain. The refactored design gives us:</p>
<ol>
<li><p><strong>Explicit relationships</strong>: Related data is grouped together, making dependencies clear</p>
</li>
<li><p><strong>Type safety</strong>: The compiler prevents us from forgetting to handle optional values</p>
</li>
<li><p><strong>Immutability</strong>: Effect's Data classes are immutable by default, preventing accidental mutations</p>
</li>
<li><p><strong>Structural equality</strong>: Two contacts with the same values are considered equal</p>
</li>
<li><p><strong>Better documentation</strong>: The type structure itself documents the domain logic</p>
</li>
</ol>
<p>This is just the beginning. In the next steps, we could:</p>
<ul>
<li><p>Add branded types to prevent mixing up similar primitives (like ensuring an email string is actually validated)</p>
</li>
<li><p>Use Effect Schema for runtime validation</p>
</li>
<li><p>Create smart constructors that enforce business rules</p>
</li>
<li><p>Leverage Effect's error handling for validation failures</p>
</li>
</ul>
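<p>As a taste of the first of those next steps, here's a minimal branded-type sketch in plain TypeScript. The brand name and the deliberately naive regex are hypothetical, and Effect Schema also supports brands directly; this only shows the underlying idea:</p>

```typescript
// A branded string: still a plain string at runtime, but the phantom
// "__brand" field means an arbitrary string cannot be passed where a
// VerifiedEmail is expected.
type VerifiedEmail = string & { readonly __brand: "VerifiedEmail" }

// Hypothetical smart constructor: the only way to obtain a VerifiedEmail
// is to pass this (deliberately minimal) validation.
function verifyEmail(raw: string): VerifiedEmail | null {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(raw) ? (raw as VerifiedEmail) : null
}
```

<p>Functions that require a <code>VerifiedEmail</code> can then rely on the compiler to reject strings that never went through validation.</p>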
<h2 id="heading-conclusion">Conclusion</h2>
<p>By thinking carefully about our types, we can encode domain knowledge directly into our code structure. TypeScript's type system, enhanced by Effect's functional primitives, gives us powerful tools to make illegal states unrepresentable and make our intentions explicit.</p>
<p>The key is to let the types and schema guide your design. When you find yourself writing comments to explain constraints or relationships, consider whether you could encode that information in the type system instead. Your future self, and your teammates, will thank you.</p>
<hr />
<p><em>This article was inspired by Scott Wlaschin's excellent "Designing with types" series on</em> <a target="_blank" href="https://fsharpforfunandprofit.com/posts/designing-with-types-intro/"><em>F# for Fun and Profit</em></a><em>, adapted for the TypeScript and Effect ecosystem.</em></p>
]]></content:encoded></item><item><title><![CDATA[A Survival Guide to Build Healthy Tech Organizations]]></title><description><![CDATA[In today's fast-changing technology world, how we organize our teams and build our software can make the difference between success and failure. Many companies learn these lessons the hard way, through expensive mistakes and delayed projects. Let's e...]]></description><link>https://akhansari.tech/a-survival-guide-to-build-healthy-tech-organizations</link><guid isPermaLink="true">https://akhansari.tech/a-survival-guide-to-build-healthy-tech-organizations</guid><category><![CDATA[tech culture]]></category><category><![CDATA[organization]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Wed, 10 Sep 2025 19:15:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757522944714/c22c86e0-224b-4e39-a5f3-e934e557fec1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's fast-changing technology world, how we organize our teams and build our software can make the difference between success and failure. Many companies learn these lessons the hard way, through expensive mistakes and delayed projects. Let's explore some opinionated fundamental principles that could help build better products and create healthier work environments, starting with the technical foundations and building up to the cultural practices that make organizations successful.</p>
<p>These are not new concepts, but ones I consider part of the canon. This guide is stronger on the “what” and “why” than the “how”.</p>
<h2 id="heading-the-technical-foundation-high-cohesion-and-low-coupling">The Technical Foundation: High Cohesion and Low Coupling</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757523686452/160f28c2-9a42-4c0c-b6cd-8b1066222102.webp" alt="High Cohesion and Low Coupling (from Haptik)" class="image--center mx-auto" /></p>
<p>Before we can understand how to organize teams or share knowledge effectively, we need to grasp two fundamental principles that guide all good software architecture: high cohesion and low coupling. These aren't just technical concepts, they shape how we think about organizing entire companies.</p>
<p>High cohesion means keeping related things together. If you have code that handles user authentication, all of that code should be in one place, maintained by one team. It's like keeping all your cooking utensils in the kitchen. They belong together because they serve the same purpose. Low coupling means minimizing dependencies between different parts of the system. Each component should be able to work independently as much as possible. Think of it like electrical outlets in your home. You can plug in a lamp without needing to know how the television works. When systems have low coupling, you can change one part without breaking everything else.</p>
<p>These principles don't just apply to code, they apply to how we organize our teams and define their responsibilities. This brings us to one of the most important decisions a tech company makes: how to structure its domains.</p>
<h2 id="heading-understanding-domains-boundaries-autonomy-and-accountability">Understanding Domains: Boundaries, Autonomy, and Accountability</h2>
<p>When we talk about domains in a tech company, we're talking about different areas of the business that have their own specific purposes and knowledge. Think of domains like different departments in a hospital: emergency care, surgery, and administration all serve different functions and require different expertise. The way we define these domains should follow our principles of high cohesion and low coupling, keeping related capabilities together while minimizing dependencies between different areas.</p>
<p>One common mistake companies make is creating organizational domains based on "actor domains", that is, organizing teams around who uses the system rather than what the system does. For example, imagine an e-commerce company that creates separate teams for "customer systems," "seller systems," and "admin systems." This seems logical at first, but it violates our principle of high cohesion. The same business logic, like calculating prices or managing inventory, gets duplicated across multiple teams. When pricing rules change, three different teams need to update their code. This leads to inconsistencies, bugs, and wasted effort. Instead, it's better to organize around business capabilities like "pricing," "inventory," or "ordering", regardless of who uses these features.</p>
<p>But here's what many organizations miss: effective domains need clear boundaries. A boundary is like a fence around your property. It defines what's inside and what's outside, what you're responsible for and what you're not. In software terms, this means each domain should have a clear understanding of what concepts, data, and rules belong to it. The pricing domain owns everything about how prices are calculated, stored, and validated. It doesn't just have some pricing code scattered among other things; it has complete ownership of the pricing concept. When another team needs pricing information, they don't reach into the pricing domain's database or duplicate its logic. They ask the pricing domain through a well-defined interface, like calling an API or subscribing to price change events.</p>
<p>These boundaries are crucial because they enable something powerful: autonomy. When a domain has clear boundaries and owns its concepts completely, the team responsible for that domain can make decisions independently. The pricing team can change how they calculate discounts without asking permission from the ordering team. They can refactor their code, change their database schema, or even rewrite everything in a different programming language, as long as they maintain their contracts with other domains. This autonomy allows teams to move fast and innovate without being blocked by dependencies on other teams.</p>
<p>Similarly, the platform domain, the technical foundation that supports other teams, should focus on providing technical capabilities, not business logic. Think of the platform team as the people who build the roads, not the ones who decide where the cars should go. When platform teams start including shared business capabilities, they create unwanted coupling and violate domain boundaries. Every time the business needs change, multiple teams have to wait for the platform team to update the shared code. It's like having only one kitchen in an apartment building, everyone has to wait their turn to cook.</p>
<p>But autonomy without alignment is just chaos. Imagine if every domain team optimized only for their own goals without considering the bigger picture. The pricing team might create the most sophisticated pricing engine in the world, but if it takes five seconds to calculate a price, the customer experience suffers. This is where alignment becomes critical. All domains need to work toward the same company goals, even while maintaining their autonomy. If the company goal is to provide instant checkout, then the pricing domain needs to balance sophistication with speed. The inventory domain needs to ensure stock checks are fast. The payment domain needs to process transactions quickly. Each team has autonomy in how they achieve these goals, but the goals themselves are shared.</p>
<p>This balance between autonomy and alignment requires clear accountability. When a team owns a domain with clear boundaries and has the autonomy to make decisions, they also become accountable for the outcomes. If the pricing service goes down during Black Friday, the pricing team can't blame the platform team or the database team. They own their domain completely, including its reliability. This accountability might seem harsh, but it's actually empowering. When teams know they're truly responsible for their domain's success, they make better decisions. They think about monitoring, testing, and reliability from the start, not as afterthoughts. They document their decisions because they know they'll have to live with the consequences.</p>
<p>The relationship between boundaries, autonomy, alignment, and accountability creates a powerful dynamic. Clear boundaries enable autonomy by defining what each team owns. Autonomy enables teams to move fast and innovate. Alignment ensures that this speed and innovation serve the company's goals. Accountability ensures that teams take their responsibilities seriously and build sustainable solutions. When all four elements work together, you get teams that are both independent and collaborative, fast and reliable, innovative and responsible.</p>
<p>But even when we organize our domains correctly with clear boundaries and balanced autonomy, we still need to make strategic decisions about where to focus our efforts. Not all domains deserve equal attention.</p>
<h2 id="heading-identifying-what-really-matters-understanding-domain-types">Identifying What Really Matters: Understanding Domain Types</h2>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/1*eXtFuCxdStajdCioEEsgkw.jpeg" alt /></p>
<p>To make smart decisions about where to invest your resources, you need to understand that not all parts of your business are equally important. In domain-driven design, we typically classify domains into three categories: core domains, supporting domains, and generic domains. Think of these like the different components of a restaurant. The signature dishes that customers come for represent core domains, the custom reservation system might be a supporting domain, and basic accounting software would be a generic domain.</p>
<p>Core domains are the areas that make your company special and different from competitors. For Netflix, the recommendation algorithm is a core domain. It's what keeps customers coming back. For Amazon, logistics and delivery might be core domains. These areas deserve the most attention, the best developers, and the most investment. Core domains are where you want to innovate and excel because they directly impact your competitive advantage. If you're not the best in the world at your core domains, or at least trying to be, your business is in trouble.</p>
<p>Supporting domains are important for your business but don't differentiate you from competitors. Imagine an online retailer's return processing system. It needs to work well because poor return handling frustrates customers, but having a slightly better return process than competitors won't win you the market. Supporting domains need to be good enough to not cause problems, but they don't need to be revolutionary. You might build these in-house if they have specific requirements unique to your business, but you won't put your top talent here.</p>
<p>Generic domains are areas where your needs are the same as everyone else's. Every company needs email, basic accounting, or employee time tracking, but these don't make your business special. For generic domains, buying or using existing solutions almost always makes more sense than building your own. It's like a restaurant buying dishwashers instead of inventing their own. The problem has been solved, and solving it again adds no value to your business.</p>
<p>Understanding these distinctions helps you avoid one of the most common mistakes in tech companies: spreading resources evenly across all domains. When companies fail to identify and prioritize their core domains, they might spend equal time perfecting their internal expense reporting system as they do on the features that actually attract and retain customers. It's like a restaurant spending as much time organizing their storage closet as they do perfecting their signature dishes, it doesn't make business sense.</p>
<p>But here's where it gets interesting: domains don't stay in the same category forever. What's core today might become supporting tomorrow, and what's currently custom-built might become available as a generic solution next year. This is where Wardley mapping becomes incredibly valuable. Wardley mapping is a strategic planning technique that helps you visualize your business components and understand how they evolve over time.</p>
<p>Imagine you're running a company in 2010 that does real-time video streaming. Back then, the technology to stream video efficiently was a core domain: it was hard, few companies could do it well, and it provided competitive advantage. But over the years, as cloud providers began offering video streaming services and the technology became commoditized, it evolved from core to generic. Companies that recognized this evolution early could shift their resources from maintaining streaming infrastructure to building better content or user experiences. Those that didn't wasted valuable resources maintaining something they could have simply purchased, with no competitive return to show for the effort.</p>
<p>Wardley mapping helps you see this evolution visually. You map out all your business components on a chart that shows both how visible they are to users and how evolved they are from custom-built to commodity. A brand new machine learning algorithm might be on the left side of the map showing it's custom and innovative, while email hosting would be on the right showing it's a commodity. By creating these maps, you can see which components are evolving, where you should invest, and what you should stop building yourself.</p>
<p>The real power of combining domain classification with Wardley mapping is that it helps you make strategic decisions about the future, not just the present. You might identify that your current core domain is rapidly becoming commoditized and realize you need to find new areas for differentiation. Or you might spot an emerging technology in a supporting domain that could become core to your business if you invest early. It's like a surfer reading the waves, you need to position yourself where the wave will be, not where it is now.</p>
<p>This strategic thinking about domains connects directly to how we organize our teams. When you clearly identify which domains are core, supporting, and generic, understand how they're evolving, and establish clear boundaries with appropriate autonomy and accountability, you can make better decisions about team structure, hiring, and resource allocation. This clarity helps prevent one of the biggest organizational challenges: silos.</p>
<h2 id="heading-breaking-down-walls-the-danger-of-silos">Breaking Down Walls: The Danger of Silos</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757524831893/66c27f8a-f1b5-4739-a30e-bcca6b4ba3cc.jpeg" alt class="image--center mx-auto" /></p>
<p>Silos happen when different teams or departments work in isolation, rarely communicating or collaborating with each other. Picture a company where the development team never talks to the operations team, and the operations team never talks to customer support. When a customer reports a problem, it takes days or weeks to fix because information moves slowly between these isolated groups. Even well-designed domains can become silos if we're not careful.</p>
<p>Breaking down silos means creating an environment where information flows freely and teams work together toward common goals. This doesn't mean everyone needs to know everything, but it does mean removing artificial barriers that prevent collaboration. When teams share knowledge and work together, problems get solved faster, and everyone learns from each other's experiences.</p>
<p>The antidote to silos isn't just telling people to communicate more. It's building systems and cultures that make collaboration natural and effortless, and removing unnecessary layers. But here's something crucial that many organizations miss: you can't effectively break down silos if different teams operate with fundamentally different technical values and practices. This brings us to an essential but often overlooked principle: the need for a homogeneous technical culture.</p>
<h2 id="heading-building-a-unified-technical-culture-from-recruitment-to-daily-practice">Building a Unified Technical Culture: From Recruitment to Daily Practice</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757525490071/7fe8b97c-5d5b-41e6-8186-c626e63e3af0.jpeg" alt class="image--center mx-auto" /></p>
<p>When we talk about a homogeneous technical culture, it's important to clarify what we mean and what we don't mean. We're not talking about hiring people who all look the same, think the same, or come from the same backgrounds. Diversity of perspectives, experiences, and backgrounds is crucial for innovation and problem-solving. What we're talking about is creating alignment around technical values, practices, and approaches to problem-solving. Think of it like an orchestra: you want different instruments playing different parts, but they all need to be reading from the same sheet music and following the same conductor.</p>
<p>Imagine a company where half the developers believe in extensive testing and code reviews, while the other half believes in moving fast and fixing things in production. Or where some teams document everything meticulously while others rely entirely on tribal knowledge. These aren't just different approaches, they're fundamentally incompatible worldviews that create friction every time these teams need to work together. It's like trying to build a house where some workers are using metric measurements and others are using imperial. Even if everyone is skilled and well-intentioned, the result will be a mess.</p>
<p>A homogeneous technical culture means that everyone in the organization shares certain fundamental beliefs about how to build software. This might include agreements about code quality standards, testing practices, documentation requirements, how to handle technical debt, approaches to security, or methods for making technical decisions. When everyone shares these foundational beliefs, collaboration becomes smooth and natural. Teams can move between projects without culture shock, code can be shared without extensive rework, and decisions can be made quickly because everyone is working from the same playbook.</p>
<p>This cultural alignment needs to start from the very beginning at recruitment. Many companies make the mistake of hiring purely based on technical skills, assuming they can teach culture later. But technical culture isn't just about following rules; it's about deeply held beliefs about what good software looks like and how it should be built. If you hire someone who fundamentally believes that documentation is a waste of time into a culture that values comprehensive documentation, you're setting up both the individual and the team for frustration and conflict.</p>
<p>Smart recruitment for technical culture doesn't mean asking trick questions or requiring specific technologies. Instead, it means exploring how candidates think about software development. How do they approach debugging a complex problem? What's their philosophy on testing? How do they balance speed versus quality? What's their experience with code reviews, and what value do they see in them? You're not looking for perfect answers, but for alignment with your organization's values. A brilliant developer who believes in cowboy coding might not be a good fit for a team that values careful, methodical development, no matter how talented they are.</p>
<p>But recruitment is just the beginning. Once people join your organization, you need to actively reinforce and develop the technical culture through comprehensive onboarding and continuous training. Too many companies treat onboarding as a bureaucratic checkbox: here's your laptop, here's the wiki, good luck. But effective onboarding is where new team members internalize not just what your culture says, but how it actually works in practice. And this synchronization needs to run both ways: newcomers absorb the culture, and the culture benefits from their fresh perspective.</p>
<p>Imagine joining a new company and spending your first two weeks pair programming with experienced developers, participating in code reviews, attending architecture discussions, and seeing how decisions actually get made. You're not just learning the codebase; you're absorbing the culture through osmosis. You see that when someone says "we value testing," they actually mean it, tests are written first, they're comprehensive, and pull requests without tests don't get merged. You learn that "we document our decisions" means there's an architectural decision record for every significant choice, not just a vague suggestion to write things down sometimes.</p>
<p>Continuous training and reinforcement keep the culture strong as the organization grows and evolves. This isn't just about sending people to conferences or online courses, though those can be valuable. It's about creating regular opportunities for teams to align on practices and share knowledge. Maybe you have weekly tech talks where teams present their approaches to solving problems. Perhaps you run internal workshops on testing strategies or debugging techniques. You might organize coding dojos where developers practice new techniques together in a safe environment.</p>
<p>The power of a homogeneous technical culture becomes especially apparent when things go wrong. In organizations with fragmented cultures, incidents become blame games. The "move fast" team blames the "move carefully" team for being too slow to respond. The "comprehensive testing" team blames the "ship it now" team for causing the problem in the first place. But in organizations with aligned technical culture, everyone shares the same values about quality, the same understanding of acceptable risks, and the same commitment to learning from failures. Instead of finger-pointing, there's collaborative problem-solving.</p>
<p>This cultural alignment also dramatically speeds up decision-making and reduces organizational friction. When everyone shares fundamental technical values, you don't need to debate basic principles in every meeting. You don't need extensive approval processes because teams can be trusted to make decisions aligned with organizational values. You don't need heroes to bridge between incompatible team cultures because there's only one culture. The organization can move faster because it's not constantly dealing with internal friction from conflicting approaches.</p>
<p>However, building and maintaining a homogeneous technical culture requires constant attention and investment. As organizations grow, especially through acquisitions or rapid hiring, it's easy for subcultures to develop. Different offices might evolve different practices. Teams working on different products might drift apart in their approaches. This is why successful companies treat technical culture as a strategic priority, not an HR afterthought. They invest in bringing teams together regularly, they're deliberate about which practices are mandatory versus flexible, and they constantly reinforce cultural values through their actions, not just their words.</p>
<p>The connection between homogeneous technical culture and our other principles is profound. When teams share technical values, breaking down silos becomes much easier because there's a common language and shared understanding. Transparency works better because everyone values and practices it consistently. The dangers of hero culture diminish because the culture emphasizes collective ownership and knowledge sharing. Even architectural principles like high cohesion and low coupling are easier to maintain when everyone understands and values them equally.</p>
<p>But maintaining this cultural alignment while avoiding stagnation requires thoughtful leadership. This brings us to a management approach that seems like it should create consistency but actually creates more problems than it solves.</p>
<h2 id="heading-moving-beyond-command-and-control-why-traditional-management-fails-in-tech">Moving Beyond Command and Control: Why Traditional Management Fails in Tech</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757526111800/b5d745a4-268c-4364-9b24-17b3634b3178.jpeg" alt class="image--center mx-auto" /></p>
<p>Command and control management comes from military and industrial age thinking, where work was predictable and repetitive. In this model, managers at the top make all important decisions, create detailed plans, and pass orders down through layers of hierarchy. Workers at the bottom follow these orders precisely, with little room for creativity or independent thinking. Think of it like a traditional factory assembly line where every movement is predetermined and workers simply execute predefined tasks.</p>
<p>This approach might work when building thousands of identical widgets, but software development is fundamentally different. Every problem is unique, technology changes rapidly, and the people closest to the code often have the best understanding of what's possible and what's needed. When a manager three levels up decides how long a feature should take without understanding the technical complexity, or when teams need approval for every small decision, the organization grinds to a halt.</p>
<p>Command and control creates several serious problems in tech companies. First, it destroys innovation because people stop thinking creatively when they're just following orders. Imagine a developer who sees a better way to solve a problem but knows they'll need three meetings and two weeks of approvals to try it. They'll probably just do what they were told instead. Second, it slows everything down because decisions bounce up and down the hierarchy like a ping-pong ball. By the time approval comes back, the market opportunity might be gone. Third, it demotivates talented people who joined the tech industry specifically because they wanted to solve interesting problems and build innovative solutions, not to be cogs in a machine.</p>
<p>Perhaps most dangerously, command and control creates an illusion of predictability and control that doesn't actually exist. Managers feel comfortable because they have detailed plans and status reports, but these often hide the real problems. Teams learn to report what management wants to hear rather than what's actually happening. It's like navigating with an outdated map: you might feel confident about where you're going, but you're actually lost.</p>
<p>So what's the alternative? Modern tech companies need to value risk management over control and fear. They need management approaches that embrace uncertainty and empower teams while still maintaining alignment and accountability. One powerful alternative is servant leadership, where managers see their role as supporting and enabling teams rather than commanding them. Instead of saying "do this," servant leaders ask "what do you need to succeed?" They remove obstacles, provide resources, and protect teams from organizational dysfunction while trusting them to make good decisions.</p>
<p>Another effective approach is mission command: leadership sets clear objectives and constraints, the "what" and the "why", but leaves the "how" up to the teams. For example, instead of telling a team exactly how to build a feature, leadership might say "we need to reduce customer churn by 10% this quarter, and we have this budget to work with." The team then figures out the best way to achieve that goal, using their expertise and creativity.</p>
<p>Some organizations adopt collaborative decision-making. These approaches clarify who needs to be involved in different types of decisions and how those decisions get made, but without creating rigid hierarchies. A technical decision might be made entirely by the engineering team, while a pricing decision might involve product, sales, and engineering together. The key is that decisions are made by the people with the most relevant knowledge, not just the highest rank.</p>
<p>These alternative management approaches help prevent another organizational disease that often emerges from command and control structures: internal politics. When organizations create the right structures and incentives, they can minimize the toxic effects of political behavior and create environments where merit and collaboration triumph over manipulation and self-interest.</p>
<h2 id="heading-eliminating-toxic-politics-building-merit-based-organizations">Eliminating Toxic Politics: Building Merit-Based Organizations</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757526852716/614a0de7-0bed-4c4c-9702-051f63b59c63.jpeg" alt class="image--center mx-auto" /></p>
<p>Organizational politics in tech companies is like a virus that spreads quietly but causes tremendous damage. It happens when people advance their careers not by building better products or solving harder problems, but by managing perceptions, building alliances, playing power games, and exploiting the Peter principle. Picture a company where the best engineers don't get promoted because they're too busy coding, while those who spend their time in meetings talking about other people's work climb the ladder. Or where technical decisions aren't based on what's best for the product, but on who has the most political capital, who always says yes to the boss, or who's best friends with the VP.</p>
<p>The toxicity of politics manifests in many ways. You might see engineers spending more time crafting emails or Slack messages to look good than writing code. Teams hoard information because sharing knowledge means losing power. Technical debt accumulates because no one wants to own unglamorous but necessary work that won't get them promoted. Innovation dies because new ideas threaten existing power structures. Information must cross multiple hierarchical layers before it reaches the people who need it, when it isn't ignored or distorted along the way. The best talent leaves because they're tired of seeing political operators succeed while genuine contributors are overlooked.</p>
<p>So how do you create an environment where politics can't take root? A powerful antidote to politics is radical transparency in decision-making. This doesn't mean every conversation needs to be public, but the principles, processes, and outcomes should be clear and visible to all affected parties. Multi-layer management, besides adding little value of its own, is also a barrier to transparency and collaboration.</p>
<p>Creating multiple paths for advancement also reduces political behavior. In many tech companies, the only way to advance is to become a manager, which creates perverse incentives for excellent engineers to abandon what they do best. By establishing parallel tracks, where a principal engineer can have the same status and compensation as a director, you remove the pressure to play political games to climb a single ladder. People can succeed by excelling at what they love, whether that's coding, architecture, mentoring, or management.</p>
<p>The connection between eliminating politics and our other principles is clear. When you have transparent decision-making, clear domain boundaries with accountability, and a homogeneous technical culture, there's less room for political maneuvering. When teams have genuine autonomy and alignment, when the management culture is against command and control, there is no room to play political games. When failure is seen as learning rather than blame, people don't need to engage in political cover-ups. Each principle reinforces the others, creating an environment where merit and collaboration naturally flourish.</p>
<h2 id="heading-choosing-openness-transparency-over-opacity">Choosing Openness: Transparency Over Opacity</h2>
<p>Transparency means making information visible and accessible to those who need it. This includes everything from code and documentation to decision-making processes and project status. When teams operate transparently, everyone can see what's happening, why decisions were made, and how they can contribute. Transparency is the foundation that makes collaboration possible and enables the autonomous decision-making we just discussed. Established patterns such as Architecture Decision Records (ADRs) or Arc42 documentation can help here.</p>
<p>Opacity, keeping information hidden or hard to find, creates confusion and mistrust. It's like trying to assemble furniture without instructions; you might figure it out eventually, but it takes much longer and causes unnecessary frustration. Transparent organizations move faster because people spend less time searching for information or redoing work that's already been done. More importantly, transparency allows teams to make good decisions independently because they understand the full context of their work and can build on the experience of others.</p>
<p>Transparency helps teams learn from each other's successes and failures. But this learning process can go wrong when teams start copying practices without understanding why they work. This brings us to a dangerous trap that many organizations fall into.</p>
<h2 id="heading-the-cargo-cult-trap-why-context-matters">The Cargo Cult Trap: Why Context Matters</h2>
<p><img src="https://cdn1.vogel.de/TCiAGESBqdAJ_JxoUgEw9weQgf0=/fit-in/1000x0/p7i.vogel.de/wcms/3a/7b/3a7ba983a485b30b8241eb20dee5c973/0113499464v2.jpeg" alt="Zeremonielles Kreuz des „John Frum Cargo Cult“, auf der Insel Tanna im heutigen Vanuatu." /></p>
<p>Cargo culting in software development happens when teams copy practices from successful companies without understanding the principles behind them. The term comes from Pacific island societies that built replica airports hoping to attract cargo planes, not understanding that it was World War II, not the airports themselves, that brought the supplies.</p>
<p>In tech companies, cargo culting looks like adopting daily stand-up meetings because Google does them, without understanding that stand-ups work when teams need tight coordination. Or implementing microservices because Netflix uses them, ignoring that Netflix has thousands of engineers and your startup has twelve. These teams go through the motions of successful practices but don't get the benefits because they haven't understood the underlying problems these practices were meant to solve.</p>
<p>One of the most common examples of cargo culting is how companies adopt Scrum. They hear that successful companies use Scrum, so they immediately implement all the ceremonies: daily stand-ups, sprint planning, retrospectives, and sprint reviews. They hire Scrum Masters, pay thousands for Jira, create backlogs, and start estimating in story points. But they do all this without understanding the problems Scrum was designed to solve or whether those problems actually exist in their organization. These teams end up spending more time on Scrum ceremonies than actually building software, wondering why this "proven" methodology isn't making them more productive.</p>
<p>The danger of cargo culting is that it gives teams a false sense of progress. They feel like they're following "best practices," but they're really just performing rituals. A company might implement Spotify's squad model without having Spotify's culture of autonomy and trust. They get all the overhead of the new structure but none of the benefits. Even worse, cargo culting can actively harm organizations when copied practices conflict with their actual needs. A small team adopting the communication processes designed for a 500-person organization will drown in unnecessary meetings and documentation.</p>
<p>Instead of blindly copying what successful companies do, teams should understand their own problems first, then look for principles and adapt solutions to their specific context. Ask "what problem were they solving?" not "what did they do?" This thoughtful approach to adopting practices connects directly to another dangerous misconception: the search for silver bullets.</p>
<h2 id="heading-avoiding-silver-bullets-no-single-solution-solves-everything">Avoiding Silver Bullets: No Single Solution Solves Everything</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757530808315/f1735e65-c288-40ae-9a60-2528c5fbd15c.jpeg" alt class="image--center mx-auto" /></p>
<p>A silver bullet in software development is the belief that one tool, technology, or methodology will solve all your problems. It's thinking that adopting Kubernetes or Serverless will fix all your deployment issues, or that switching to microservices will automatically make your system scalable, or that hiring from a specific company will transform your culture overnight.</p>
<p>One of the most pervasive silver bullet beliefs in the industry is around project management tools, particularly Jira. Companies experiencing problems with project visibility, team coordination, or delivery predictability often think, "If we just implement Jira properly, all our project management problems will disappear." They spend months configuring workflows, creating custom fields, setting up dashboards, and training everyone on the tool. But six months later, they find that projects are still late, communication is still poor, and now they have the additional overhead of maintaining a complex Jira setup. The tool didn't fix the underlying problems: unclear requirements, poor stakeholder communication, or unrealistic deadlines. In fact, teams often find that Jira becomes another source of friction, with developers spending hours updating tickets instead of writing code, and managers becoming obsessed with burndown charts that don't reflect actual progress. The real issues, like why estimates are always wrong or why requirements keep changing mid-sprint, remain unaddressed because everyone thought the tool would somehow solve these deeper organizational problems.</p>
<p>The silver bullet mentality is seductive because it promises simple solutions to complex problems. Leadership loves silver bullets because they seem like decisive action. "We're moving everything to the cloud!" sounds much better in a board meeting than "We're going to carefully evaluate which of our systems would benefit from cloud hosting and migrate them incrementally over two years."</p>
<p>But real organizations are complex systems with interconnected challenges. Performance problems might stem from poor database design, not your programming language. Team velocity might be slow because of unclear requirements, not your project management tool. Customer satisfaction might be low because of poor product decisions, not your technology stack. When companies chase silver bullets, they often undergo expensive, disruptive changes that don't address their real problems. They might spend months migrating to a new framework while their competitors focus on building better features.</p>
<p>The alternative to silver bullets is understanding that improvement comes from many small, thoughtful changes rather than one dramatic transformation. It means solving specific problems with targeted solutions, measuring results, and adjusting based on what you learn. This measured approach to change connects directly to how we think about failure and learning.</p>
<h2 id="heading-learning-from-failure-the-fail-fast-philosophy">Learning from Failure: The "Fail Fast" Philosophy</h2>
<p>Creating an environment where teams can experiment with solutions, rather than searching for silver bullets or copying others, requires a fundamental shift in how we think about failure. "Fail often, fail fast" doesn't mean being careless. It means trying new things, discovering quickly when they don't work, and adjusting course without blame or shame.</p>
<p>When teams fear blame, they hide problems until they become catastrophes. They avoid trying innovative solutions because the risk of failure seems too high. They might even fall back on cargo culting or silver bullet solutions because these feel safer; after all, if everyone else is doing it, you can't be blamed if it doesn't work. But when organizations embrace failure as a learning opportunity, magic happens. Teams experiment freely, problems surface quickly while they're still small, and everyone learns from both successes and mistakes.</p>
<p>This philosophy of learning from failure connects directly to how we build our teams. When people aren't afraid to make mistakes, they're more willing to take on challenges, admit when they need help, and share their struggles with others. This creates an environment where knowledge spreads naturally and no one person becomes a critical dependency.</p>
<h2 id="heading-building-sustainable-teams-beyond-hero-culture">Building Sustainable Teams: Beyond Hero Culture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757530245831/d83db5df-6ccd-4bbe-8ef5-2b524c1867a0.jpeg" alt class="image--center mx-auto" /></p>
<p>Hero culture emerges when there's a systemic problem or gap in a system and an individual decides to fill that gap. It often manifests as "incident-driven development": when something breaks, everyone stops what they're doing, and the person with the most context becomes the hero who saves the day. While this might seem efficient in the moment, it creates a dangerous pattern.</p>
<p>The problem isn't that someone knowledgeable fixed an issue. It's that organizations start relying on this pattern instead of addressing why problems keep happening. When heroes consistently step in to save the day, they become the sole repositories of critical knowledge, creating dangerous single points of failure. These heroes often work longer hours, take on more stress, and feel pressured to always be available, leading to burnout. Meanwhile, other team members don't develop the skills they need because they rely on the hero to handle difficult situations.</p>
<p>Hero culture creates a self-reinforcing cycle: quick fixes by heroes prioritize immediate solutions over long-term maintainability, leading to more technical debt, which causes more incidents, which requires more heroics. Teams with strong hero cultures experience significantly higher burnout rates than those with distributed responsibility patterns. The organization might praise and recognize crisis management, but nobody asks the critical question: why did this crisis happen in the first place?</p>
<p>Sometimes hero culture emerges from cargo culting. Companies copy the "rockstar developer" culture they've heard about without understanding its problems. Other times, it's seen as a silver bullet, "if we just hire the right genius, all our problems will be solved." But sustainable organizations build systems and processes that address systemic gaps rather than depending on individuals to fill them. They document knowledge, share responsibilities, conduct root cause analysis after incidents, and ensure multiple people can handle critical tasks.</p>
<p>This approach might seem slower at first, but it creates resilient teams that can handle challenges even when key members are absent. When you combine all the principles in this series, hero culture naturally disappears: high cohesion and low coupling in your architecture; domains organized around capabilities with clear boundaries and balanced autonomy; core domains prioritized while supporting and generic ones are appropriately managed; a unified technical culture built from recruitment through continuous training; toxic politics eliminated through transparency and merit-based systems; silos broken down through empowering leadership rather than command and control; transparency maintained; cargo culting and silver bullets avoided; and failure embraced as learning. The system itself becomes robust, not dependent on any individual person.</p>
<h2 id="heading-bringing-it-all-together">Bringing It All Together</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757531039507/f8020f43-f3de-4914-9d8d-3020339dcba6.jpeg" alt class="image--center mx-auto" /></p>
<p>These principles form a complete system for building effective tech organizations. We start with the technical foundation of high cohesion and low coupling, apply these principles to how we organize our domains with clear boundaries that enable autonomy while maintaining alignment and accountability. We identify which domains truly matter for our success through careful classification and strategic tools like Wardley mapping. We then break down the walls between teams, but recognize that this requires a unified technical culture built through deliberate recruitment and continuous reinforcement. We replace command and control with empowering management approaches like servant leadership and mission command, while actively eliminating toxic politics through transparency, clear criteria, and merit-based advancement. Through transparency, we enable teams to make good decisions independently, while avoiding the traps of cargo culting (like blindly adopting Scrum) and silver bullets (like expecting Jira to solve all project management problems).</p>
<p>By creating a culture where failure becomes learning and heroes become mentors instead of saviors, we build organizations that can adapt and grow. Each principle supports the others, clear domain boundaries enable autonomy, which requires accountability and alignment. A homogeneous technical culture enables transparency, which prevents politics, which enables autonomous decision-making, which requires moving beyond command and control. Learning from failure reveals that complex problems need multiple solutions (not silver bullets), and sustainable team practices prevent hero culture from taking root.</p>
<p>The journey to implement these principles isn't always easy, and no company gets everything right immediately. The key is to understand your own context, solve your specific problems, and learn from both your own experiences and the principles (not just the practices) of successful organizations. Start with one area, but always keep the bigger picture in mind. As each piece falls into place, the others become easier to implement, creating a positive cycle of improvement that benefits everyone, from individual developers to customers to the business as a whole.</p>
<p>Remember: there's no silver bullet that will transform your organization overnight, and blindly copying what works for others won't solve your unique challenges. But by thoughtfully applying these principles to your specific context, you can build a tech organization that's both effective and sustainable for the long term. The strength doesn't come from any single principle, but from how they work together to create a coherent, resilient system that can handle whatever challenges the future brings.</p>
]]></content:encoded></item><item><title><![CDATA[Why Are Logs Big Lies?]]></title><description><![CDATA[Imagine you're a detective trying to solve a mystery, but every clue you find is either missing important details, points you in the wrong direction, or gets buried under thousands of other confusing clues. This is exactly what working with applicati...]]></description><link>https://akhansari.tech/why-are-logs-big-lies</link><guid isPermaLink="true">https://akhansari.tech/why-are-logs-big-lies</guid><category><![CDATA[logging]]></category><category><![CDATA[observability]]></category><category><![CDATA[error handling]]></category><category><![CDATA[tracing]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Mon, 08 Sep 2025 16:48:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757340992493/c14da697-f96c-4fe8-b496-666d18e3e032.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine you're a detective trying to solve a mystery, but every clue you find is either missing important details, points you in the wrong direction, or gets buried under thousands of other confusing clues. This is exactly what working with application logs feels like for most software developers and system administrators today.</p>
<p>Logs were supposed to be our trusted companions in understanding what happens inside our applications. They should tell us the story of our software's behavior, help us find problems quickly, and guide us toward solutions. Instead, they often become sources of frustration, confusion, and wasted cost and time. Let's explore why this happens and what we can do about it.</p>
<h2 id="heading-the-common-patterns-that-make-logs-unreliable">The Common Patterns That Make Logs Unreliable</h2>
<p>Throughout your career in software development, you've probably encountered several frustrating patterns when dealing with logs. These patterns reveal why logs can become "big lies" rather than helpful truth-tellers.</p>
<p><strong>The Spam Problem</strong> occurs when your systems generate so many error logs that teams simply stop paying attention to them. It's like having a car alarm that goes off constantly; eventually, everyone ignores it, even when there's a real problem. When logs become noise instead of signal, they lose their primary purpose of alerting us to issues that need attention.</p>
<p><strong>The Flood Pattern</strong> happens when one particular type of error suddenly increases dramatically. The team looks at it and thinks, "We've seen this before, it's probably the same old issue," or "Maybe some external service is having problems." Without proper investigation, teams often adopt a wait-and-see approach, hoping the problem will resolve itself. This reactive mindset can hide critical issues that require immediate attention, while the growing log volume drives up infrastructure costs.</p>
<p><strong>The Sign of Life Syndrome</strong> represents a twisted relationship with error logs. Teams start believing that constant error logs actually indicate the system is working properly. The logic goes: if we see errors, at least we know the system is running. If we don't see errors, maybe the logging system is broken or the application has stopped entirely. This backward thinking shows how logs have failed in their fundamental role.</p>
<p><strong>The No Value Response</strong> perfectly captures the frustration many teams feel. When someone reports a spike in error logs, the standard response becomes "just restart the service." This approach treats logs as meaningless noise rather than valuable diagnostic information. It's like treating every illness with the same medicine without understanding the symptoms.</p>
<p><strong>The Low Value Usage</strong> describes teams that only check logs when someone reports a bug or when an incident has already occurred. Logs become reactive tools rather than proactive monitoring systems. If the team is lucky, the logs might provide a starting point for investigation, but they're not trusted enough for regular monitoring.</p>
<p><strong>The Low Context Dilemma</strong> happens when logs provide information that's technically accurate but practically useless. You might see an error message that tells you something went wrong, but it doesn't give you enough context to understand why it happened or how to fix it. It's like getting a weather report that says "bad weather" without specifying if it's rain, snow, or a hurricane.</p>
<p><strong>The Correlation Problem</strong> occurs when logs give you enough information to identify which line of code threw an error, but no insight into the chain of events that led to that error. You know what broke, but you don't know why it broke. Understanding the upstream causes becomes nearly impossible without additional context.</p>
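<p>As an illustration, here is a minimal Python sketch (the names are hypothetical) of one common remedy: a correlation identifier set once per request and stamped onto every log record, so an error deep in the call chain can be traced back to the events that led to it:</p>

```python
import contextvars
import logging
import uuid

# Context variable carrying the current request's correlation id.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Stamp every record with the active correlation id."""
    def filter(self, record):
        record.correlation_id = correlation_id.get()
        return True

logger = logging.getLogger("app")
_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter("%(correlation_id)s %(levelname)s %(message)s"))
_handler.addFilter(CorrelationFilter())
logger.addHandler(_handler)
logger.setLevel(logging.INFO)

def charge_card():
    # Deep in the call chain: no id is passed explicitly,
    # yet it still appears on the emitted line.
    logger.error("card declined")

def handle_request():
    # Set once at the entry point; every downstream log line shares it.
    correlation_id.set(uuid.uuid4().hex)
    logger.info("request received")
    charge_card()

handle_request()
```

<p>With something like this in place, filtering the log store on a single identifier reconstructs the whole chain of events for one request, which is exactly the upstream context that plain stack traces lack.</p>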
<p><strong>The Release Check Burden</strong> shows how logs can become expensive overhead. Teams generate thousands of logs after a release, only to check one or two of them during the first few minutes to confirm that everything works correctly. This creates unnecessary computational and storage costs for limited value.</p>
<p><strong>The Time Travel Impossibility</strong> frustrates teams when they need to investigate issues that occurred weeks ago, only to discover that the relevant logs have been deleted or archived. Critical debugging information disappears just when you need it most.</p>
<p><strong>The Cost Control Conflict</strong> emerges when organizations implement log filtering to control storage and processing costs. Teams suddenly find themselves working with only a few percent of their log data, making thorough investigation nearly impossible.</p>
<p><strong>The Metrics Misuse</strong> happens when teams emit complex textual logs with the intention of parsing them later to generate metrics. This approach is inefficient and error-prone compared to using proper metrics systems.</p>
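<p>A small Python sketch of the contrast (the names are illustrative, and the in-memory counter stands in for a real metrics client such as Prometheus or StatsD):</p>

```python
import logging
from collections import Counter

logger = logging.getLogger("payments")

# Anti-pattern: bake the number into prose, then regex it back out
# of log storage later to build a dashboard.
def charge_logged(ok: bool) -> None:
    if not ok:
        logger.error("Payment failed for provider=stripe after 3 retries")

# Better: count the event directly at emit time in a metrics registry.
metrics = Counter()

def charge_metered(ok: bool) -> None:
    if not ok:
        metrics["payment_failures_total"] += 1

for outcome in [True, False, False, True, False]:
    charge_metered(outcome)
```

<p>The metered version gives you an exact, cheap-to-query number, while the logged version forces you to store, parse, and hope the message format never changes.</p>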
<p><strong>The Standardization Nightmare</strong> occurs when different parts of the system use different logging formats, making it difficult to search, correlate, or understand log data across the entire system.</p>
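<p>One hedge against format drift is to enforce a single machine-readable shape at the formatter level. Here is a minimal Python sketch, assuming JSON as the shared format:</p>

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render every record in one machine-readable shape."""
    def format(self, record):
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created")
```

<p>When every service emits the same fields, searching and correlating across the system becomes a query instead of an archaeology project.</p>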
<p><strong>The Custom Platform Trap</strong> represents one of the trickiest long-term patterns. Frustrated with existing solutions, teams build custom logging platforms that seem simple at first but gradually evolve into complex systems requiring dedicated maintenance. What starts as "we just need to collect and store logs" becomes a full product with its own roadmap, performance issues, and operational burden. Teams end up spending engineering time on log collectors, storage optimization, and search interfaces instead of their core business, while losing institutional knowledge as developers leave. The accidental complexity of maintaining a homegrown logging platform often becomes more problematic than the original issues it was meant to solve, diverting resources from proven solutions that have been battle-tested across thousands of organizations.</p>
<p><strong>The Audit-Tech Confusion</strong> occurs when teams mix business audit logs with technical diagnostic logs in the same systems and formats. Business audit logs serve compliance and business intelligence purposes, recording user actions, entity tracking, and regulatory events that must be preserved and searchable for extended periods. Technical diagnostic logs, by contrast, exist to help engineers understand and debug system behavior, and usually only need to live for a short time. When these two fundamentally different types of logs are confused or combined, teams end up with audit logs that lack business context and compliance rigor, while technical logs become expensive to store and difficult to use for debugging due to irrelevant business data mixed throughout.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757405140826/cbdb96db-fe46-485b-b1ae-0443fc345223.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-understanding-errors-the-heart-of-the-problem">Understanding Errors: The Heart of the Problem</h2>
<p>To understand why logs become unreliable, we need to examine the two main categories of errors and their specific problems.</p>
<h3 id="heading-handled-errors-when-expected-becomes-problematic">Handled Errors: When Expected Becomes Problematic</h3>
<p>Handled errors represent situations where your code anticipates certain problems and deals with them programmatically. However, several symptoms indicate these errors are not serving their intended purpose effectively.</p>
<p><strong>Silent Errors</strong> represent one of the most dangerous patterns. These occur when your application encounters a known error condition but handles it so quietly that no one notices the underlying problem. The error gets logged, but because it's "handled," teams don't prioritize fixing the root cause. Over time, these silent errors can accumulate and indicate deeper systemic issues.</p>
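<p>A small, illustrative Python sketch of the difference (the CRM call and all names here are hypothetical):</p>

```python
import logging

logger = logging.getLogger("sync")

def push_to_crm(user_id):
    # Hypothetical flaky downstream call.
    raise ConnectionError("crm unreachable")

# Silent error: caught, logged, forgotten; the caller believes all is well.
def sync_user_silent(user_id):
    try:
        push_to_crm(user_id)
    except ConnectionError:
        logger.warning("CRM sync failed for %s", user_id)

# Visible error: the failure is recorded and reported to the caller,
# so it can be retried, alerted on, or surfaced to the user.
failed_syncs = []

def sync_user_visible(user_id):
    try:
        push_to_crm(user_id)
        return True
    except ConnectionError:
        failed_syncs.append(user_id)
        logger.warning("CRM sync failed for %s", user_id)
        return False

ok = sync_user_visible(42)
```

<p>The silent version leaves only a log line behind; the visible version leaves a signal that the rest of the system, and the team, can actually act on.</p>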
<p><strong>State Machine Breaks</strong> happen when your application's flow or business logic encounters a known error and terminates unexpectedly. While the error is technically handled, it reveals that your state management or workflow design has fundamental flaws. The question becomes: why is the normal flow breaking down, and why aren't there better alternative paths?</p>
<p><strong>Graceful Management Failures</strong> occur when errors are caught and logged but not handled in a user-friendly way. Instead of providing alternative solutions or fallback mechanisms, the application simply records the error and gives up. This approach provides a poor user experience and indicates insufficient error handling design.</p>
<p><strong>Metrics Confusion</strong> happens when teams use error logs for monitoring purposes instead of proper metrics systems. Error logs are not designed to be metrics, and using them this way creates unnecessary overhead and reduces the effectiveness of both logging and monitoring systems.</p>
<p><strong>Exception Handling Confusion</strong> occurs when teams log errors that should actually be handled by higher-level exception management systems. This creates redundant error handling and can mask more serious issues that need immediate attention.</p>
<p><strong>Alert Absence</strong> represents a critical gap where handled errors are logged but don't trigger any notifications or automatic incident creation. If an error is worth logging, it might be worth monitoring, but many teams fail to make this connection.</p>
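<p>One way to close that gap is to make alerting a logging concern rather than a human one. Here is a minimal Python sketch, where the notifier list is a stand-in for a real pager or incident-management API:</p>

```python
import logging

alerts = []  # stand-in for a pager or incident API client

class AlertHandler(logging.Handler):
    """Turn ERROR-and-above records into alert events instead of silent lines."""
    def __init__(self, notify):
        super().__init__(level=logging.ERROR)
        self.notify = notify

    def emit(self, record):
        self.notify(f"[{record.levelname}] {record.getMessage()}")

logger = logging.getLogger("billing")
logger.addHandler(AlertHandler(alerts.append))
logger.setLevel(logging.INFO)

logger.info("invoice generated")         # logged only, no alert
logger.error("invoice ledger mismatch")  # logged AND raises an alert event
```

<p>The point is not this particular handler, but the principle: if an error is worth logging, the decision about whether it also deserves a notification should be made explicitly, not left to chance.</p>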
<h3 id="heading-unhandled-errors-when-the-unexpected-happens">Unhandled Errors: When the Unexpected Happens</h3>
<p>Unhandled errors represent situations where your application encounters problems it wasn't designed to handle. These errors often indicate more serious issues, but they come with their own set of problems.</p>
<p><strong>Incident Management Gaps</strong> occur when unhandled errors don't trigger automatic incident creation, even at low priority levels. These errors represent genuine surprises in your system, and they deserve attention and investigation.</p>
<p><strong>Documentation Deficiency</strong> happens when teams encounter unhandled errors but don't document them for future reference. Without proper documentation, teams repeatedly encounter the same issues without building institutional knowledge about how to handle them.</p>
<p><strong>Error Bombardment</strong> occurs when the same unhandled error repeats rapidly, overwhelming your logging and monitoring systems. This pattern often indicates cascading failures or retry loops that need immediate attention.</p>
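<p>A simple mitigation is to deduplicate at the source. This illustrative Python filter (a sketch only; it ignores multi-process setups) lets the first occurrence of a message through and mutes identical repeats for a time window:</p>

```python
import logging
import time

class RepeatSuppressFilter(logging.Filter):
    """Let the first occurrence of a message through, then mute identical
    repeats for `window` seconds so a retry loop cannot flood the log."""
    def __init__(self, window: float = 60.0):
        super().__init__()
        self.window = window
        self.last_seen = {}

    def filter(self, record):
        key = (record.levelno, record.getMessage())
        now = time.monotonic()
        if now - self.last_seen.get(key, float("-inf")) < self.window:
            return False  # drop the repeat
        self.last_seen[key] = now
        return True

handler = logging.StreamHandler()
handler.addFilter(RepeatSuppressFilter(window=60))
```

<p>Suppression is a symptom treatment, of course; the bombardment itself usually points at a retry loop or cascading failure that deserves its own fix.</p>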
<p><strong>Fallback Absence</strong> represents a fundamental design problem where systems have no graceful degradation mechanisms when unexpected errors occur. Instead of failing safely, systems crash or behave unpredictably.</p>
<p><strong>Disaster Recovery Blindness</strong> happens when teams don't understand how unhandled errors relate to their overall system resilience and disaster recovery plans.</p>
<h3 id="heading-information-logs-the-context-problem">Information Logs: The Context Problem</h3>
<p>Information logs should provide valuable insights into your application's behavior, but they often suffer from several critical problems.</p>
<p><strong>Repetitive and Context-less Content</strong> makes information logs difficult to use for actual debugging or understanding. Logs that simply repeat the same messages without providing meaningful context become noise rather than signal.</p>
<p><strong>Production Dependency</strong> occurs when teams rely too heavily on production logs to understand their application's behavior. This dependency often indicates insufficient testing and inadequate understanding of the system's normal operation.</p>
<p><strong>Developer-Centric Thinking</strong> happens when logs reflect the developer's mental model rather than the actual business logic or user experience. These logs make sense to the person who wrote them but provide little value to others trying to understand the system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757405203252/2f1f3acb-3087-4897-8726-6abe2d6d8955.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-solutions-making-logs-truthful-again">Solutions: Making Logs Truthful Again</h2>
<p>Understanding these problems is the first step toward creating more reliable and valuable logging systems. Let's explore practical solutions that can transform logs from sources of frustration into powerful debugging and monitoring tools.</p>
<h3 id="heading-improving-handled-error-management">Improving Handled Error Management</h3>
<p><strong>The Result Pattern</strong> offers a structured approach to handling expected errors without relying heavily on logging. However, you need to understand when and how to use this pattern effectively.</p>
<p>Avoid using the Result pattern when you need detailed diagnostics about what went wrong. Results are designed for simple success or failure scenarios, not complex debugging situations. Don't use Results to reinvent exception handling mechanisms that already exist in your programming language. Avoid Results when you need your application to fail fast rather than continuing with degraded functionality.</p>
<p>Results are only valuable when someone will actually check and handle the error cases. If your code ignores Result errors, you're not gaining any benefit from this pattern. Be particularly careful when using Results for I/O operations, where exception handling might be more appropriate.</p>
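<p>As a rough sketch of what this can look like, here is a minimal Result type in TypeScript (the names here are illustrative, not from any particular library):</p>
<pre><code class="lang-typescript">// Minimal Result type: either a success carrying a value,
// or a failure carrying an error description.
type Result&lt;T, E&gt; = { ok: true; value: T } | { ok: false; error: E };

// Illustrative parser: an expected failure is returned, not logged.
function parsePositiveInt(raw: string): Result&lt;number, string&gt; {
    const n = Number(raw);
    if (!Number.isInteger(n) || n &lt;= 0) {
        return { ok: false, error: `not a positive integer: ${raw}` };
    }
    return { ok: true, value: n };
}

// The caller must inspect the case before touching the value.
const parsed = parsePositiveInt("42");
if (parsed.ok) {
    console.log(parsed.value);
}
</code></pre>
<p>The point is that the caller has to check the case before using the value; a Result whose error branch nobody inspects is exactly the failure mode described above.</p>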
<p><strong>Context Enrichment</strong> involves adding meaningful information to your application logs, traces and spans. Instead of logging bare error messages, include relevant context like user identifiers, request parameters, system state, and the sequence of operations that led to the current situation.</p>
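<p>For instance, a context-enriched error entry might look like this sketch (the field names and identifiers are illustrative):</p>
<pre><code class="lang-typescript">// The context needed to debug later, instead of a bare "payment failed".
interface LogContext {
    userId: string;
    requestId: string;
    operation: string;
    attempt: number;
}

// Emitting JSON keeps the entry structured and queryable by log tooling.
function errorLine(message: string, context: LogContext): string {
    return JSON.stringify({ level: "error", message, ...context });
}

const line = errorLine("payment declined", {
    userId: "u-123",              // illustrative identifiers
    requestId: "req-789",
    operation: "checkout/charge",
    attempt: 2,
});
</code></pre>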
<h3 id="heading-addressing-unhandled-error-issues">Addressing Unhandled Error Issues</h3>
<p><strong>Continuous Improvement</strong> means treating unhandled errors as learning opportunities rather than just problems to fix. Each unhandled error should trigger a review process that asks: how can we prevent this type of error in the future, and how can we handle it more gracefully if it occurs again?</p>
<p><strong>Testing Excellence</strong> involves creating comprehensive test suites that cover not just happy path scenarios but also various failure conditions. Good testing practices help you anticipate and handle more error conditions before they become unhandled surprises in production.</p>
<p><strong>Infrastructure Knowledge</strong> means understanding your deployment environment, dependencies, and operational context well enough to anticipate potential failure modes. Teams that know their infrastructure can design better error handling and recovery mechanisms.</p>
<p><strong>Resilience Patterns</strong> like circuit breakers, fallback mechanisms, and proper disaster recovery planning help your systems handle unexpected errors more gracefully. These patterns reduce the number of truly unhandled errors and improve overall system reliability.</p>
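<p>To make the idea concrete, here is a deliberately minimal circuit breaker sketch in TypeScript; a production breaker would also add a timeout and a half-open state before closing again:</p>
<pre><code class="lang-typescript">// After `threshold` consecutive failures the circuit opens and
// calls fail fast with a fallback instead of invoking the operation.
class CircuitBreaker&lt;T&gt; {
    private failures = 0;
    constructor(private threshold: number, private fallback: T) {}

    call(operation: () =&gt; T): T {
        if (this.failures &gt;= this.threshold) {
            return this.fallback; // open: degrade gracefully
        }
        try {
            const result = operation();
            this.failures = 0; // success closes the circuit
            return result;
        } catch {
            this.failures += 1;
            return this.fallback;
        }
    }
}
</code></pre>
<p>Note how the fallback doubles as graceful degradation: callers always receive a usable value instead of an exception that someone merely logs.</p>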
<p><strong>Correlation and Metrics</strong> involve connecting error logs with relevant metrics and tracing information. Instead of isolated error messages, you want error logs that show the broader context of system behavior and performance.</p>
<h3 id="heading-moving-beyond-logs-the-power-of-distributed-tracing">Moving Beyond Logs: The Power of Distributed Tracing</h3>
<p>While improving logging practices can help address many of the problems we've discussed, there's a more fundamental solution that's gaining widespread adoption in modern software development: distributed tracing. Understanding why traces are often superior to logs requires us to think differently about how we observe and understand our applications.</p>
<p><strong>Understanding the Fundamental Difference</strong></p>
<p>To grasp why tracing is superior, imagine you're trying to understand a conversation between multiple people in different rooms. Traditional logging is like having each person write down notes about what they said and when they said it, but without any connection between these notes. You end up with fragments of information scattered across different sources, and piecing together the actual conversation becomes a detective puzzle.</p>
<p>Distributed tracing, on the other hand, is like having a complete transcript that shows not only what each person said and when, but also how each statement relates to the others, who was responding to whom, and the complete flow of the conversation from beginning to end. Each trace represents a complete journey through your system, connecting all the related operations that happen as a result of a single request or user action.</p>
<p><strong>The Correlation Solution</strong></p>
<p>Remember the correlation problem we discussed earlier, where logs tell you what broke but not why it broke? Traces solve this by design. Every trace captures the complete path of execution through your distributed system, showing exactly how different services and components interact with each other. When something goes wrong, you can follow the trace backward to see the entire chain of events that led to the failure.</p>
<p>Consider a typical web application where a user request might travel through a load balancer, an authentication service, a business logic service, a database, and perhaps several external APIs. With traditional logging, each component creates its own log entries, and correlating them requires careful timestamp analysis, hoping you've included the right correlation identifiers in each log message, and hoping they were all sampled. With tracing, all these operations are automatically connected in a single trace that shows the complete journey, including timing information, error conditions, and the relationships between different steps.</p>
<p><strong>Context Without Effort</strong></p>
<p>One of the most significant advantages of tracing is that it provides rich context automatically. Each span within a trace can include metadata about the operation being performed, the input parameters, the results, and any relevant environmental information. This context is structured and queryable, unlike the free-form text of traditional logs.</p>
<p>When you examine a trace, you can see not just that an error occurred, but also what the user was trying to accomplish, what data was being processed, which code paths were taken, and how long each operation took. This level of context makes debugging and performance optimization much more straightforward than trying to reconstruct the same information from scattered log entries.</p>
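<p>To make this concrete, a trace can be modeled as a set of timed spans sharing one trace identifier. This is a simplified sketch; real systems such as OpenTelemetry carry far more metadata:</p>
<pre><code class="lang-typescript">// Simplified span model: what ran, for how long, and which span caused it.
interface Span {
    traceId: string;
    spanId: string;
    parentId?: string; // links each span to the operation that caused it
    name: string;
    durationMs: number;
    attributes: Record&lt;string, string&gt;;
}

// One request's journey: a root span and its children, sharing a traceId.
const checkoutTrace: Span[] = [
    { traceId: "t1", spanId: "s1", name: "POST /checkout",
      durationMs: 180, attributes: { userId: "u-123" } },
    { traceId: "t1", spanId: "s2", parentId: "s1", name: "auth.verify",
      durationMs: 12, attributes: {} },
    { traceId: "t1", spanId: "s3", parentId: "s1", name: "db.insertOrder",
      durationMs: 150, attributes: { table: "orders" } },
];

// Finding the bottleneck becomes a query, not timestamp archaeology.
const slowestChild = checkoutTrace
    .filter(s =&gt; s.parentId !== undefined)
    .reduce((a, b) =&gt; (a.durationMs &gt;= b.durationMs ? a : b));
</code></pre>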
<p><strong>Performance Insights Built In</strong></p>
<p>Traditional logs often require separate metrics and monitoring systems to understand performance characteristics. Traces include timing information by design, showing you exactly how long each operation took and where bottlenecks are occurring in your system. You can identify slow database queries, inefficient API calls, or unexpected delays without having to instrument your code with additional performance logging.</p>
<p>This timing information is particularly valuable because it's automatically correlated with the business context of each request. Instead of seeing abstract performance metrics, you can understand how performance issues affect real user scenarios and business operations.</p>
<p><strong>Sampling Intelligence</strong></p>
<p>While we mentioned sampling as a solution for managing log volume, tracing systems implement more sophisticated sampling strategies. Instead of randomly discarding information, tracing systems can use intelligent sampling that ensures you capture representative examples of different types of operations while always preserving traces that contain errors or performance anomalies.</p>
<p>This approach means you get comprehensive coverage of your system's behavior without the storage and processing overhead of capturing every single operation. The sampling decisions are made at the trace level rather than at individual log entry level, ensuring that you never lose the complete picture of any captured request.</p>
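<p>A trace-level sampling decision might look like this sketch (the thresholds and rate are illustrative):</p>
<pre><code class="lang-typescript">// Summary of a completed trace, as seen by a tail-sampling decision.
interface TraceSummary {
    hasError: boolean;
    durationMs: number;
}

// The keep/drop decision is made once per trace, never per log line.
function shouldKeep(trace: TraceSummary, sampleRate = 0.05): boolean {
    if (trace.hasError) return true;          // always keep failures
    if (trace.durationMs &gt; 1000) return true; // always keep anomalies
    return Math.random() &lt; sampleRate;        // sample ordinary traffic
}
</code></pre>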
<p><strong>Breaking Down Silos</strong></p>
<p>Traditional logging often creates information silos where each service or component logs independently. Even with good correlation identifiers, understanding cross-service interactions requires manual effort and domain knowledge. Traces naturally break down these silos by representing operations that span multiple services as unified, connected experiences.</p>
<p>This unified view is particularly valuable in multi-service architectures, where a single user request might touch dozens of different services. With tracing, you can follow the complete journey through your entire system without having to know which services are involved or how they're connected.</p>
<p><strong>Better Alerting and Monitoring</strong></p>
<p>Because traces capture complete user journeys, they enable more intelligent alerting strategies. Instead of alerting on individual log entries or isolated metrics, you can create alerts based on complete user experience scenarios. For example, you can alert when checkout processes are failing end-to-end, even if individual services appear to be functioning normally.</p>
<p>This approach reduces false alarms and ensures that your monitoring focuses on actual user impact rather than technical implementation details.</p>
<p><strong>The Learning Advantage</strong></p>
<p>Perhaps most importantly, traces help teams develop better understanding of their systems over time. When investigating issues or optimizing performance, traces provide educational value that logs simply cannot match. Team members can see how requests flow through the system, understand the relationships between different components, and develop intuition about normal versus abnormal system behavior.</p>
<p>This learning aspect is particularly valuable for onboarding new team members or understanding unfamiliar parts of the system. A few minutes exploring traces can provide insights that might take hours to gather from traditional logs and documentation.</p>
<p><strong>Making the Transition</strong></p>
<p>Moving from logs to traces doesn't have to be an all-or-nothing decision. Modern tracing systems can coexist with traditional logging, and many teams start by implementing tracing for their most critical user journeys while gradually expanding coverage. The key is to start thinking about observability in terms of user experiences and business operations rather than individual technical events.</p>
<p>When you make this mental shift, you'll find that many of the logging problems we discussed earlier simply disappear. Correlation becomes automatic, context becomes rich and structured, and the signal-to-noise ratio improves dramatically because you're focusing on complete meaningful operations rather than fragmented technical events.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757405232077/fd56136e-8afe-4053-99f3-40c0c65dfd1a.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion-three-principles-for-honest-observability">Conclusion: Three Principles for Honest Observability</h2>
<p>The transformation from unreliable logs to trustworthy system understanding rests on three fundamental principles that address the root causes of why logs become lies.</p>
<p><strong>First, prioritize better error handling over more logging.</strong> Handle errors properly at their source with fallback mechanisms and recovery strategies rather than simply recording them and hoping someone will notice later. This shifts your focus from reactive debugging to proactive system design, where errors become manageable events rather than sources of confusion.</p>
<p><strong>Second, favor application logic over logging logic.</strong> Many logging problems stem from trying to solve business problems through logging rather than through clear application design and proper unit testing. When your code and tests clearly express what they do and why, you need fewer logs to understand system behavior. Design for transparency from the beginning rather than trying to reconstruct understanding through scattered log entries.</p>
<p><strong>Third, embrace distributed tracing as your observability foundation.</strong> Tracing automatically provides the correlation, context, and complete picture that traditional logs struggle to deliver. By capturing entire user journeys through your system, tracing eliminates the fundamental problems that make logs unreliable while providing richer insights with less effort.</p>
<p>These principles work together synergistically. Better error handling reduces diagnostic logging needs, clearer application logic makes system behavior transparent, and distributed tracing provides comprehensive visibility into remaining interactions. Apply them gradually, starting where they can have the most immediate impact, and build toward systems that tell the truth about their behavior rather than obscuring it behind confusing information.</p>
]]></content:encoded></item><item><title><![CDATA[F# Features I Love]]></title><description><![CDATA[From 2018 to 2023, I worked full-time with F#, contributing to companies with revenues up to $3 billion and working on critical codebases reaching 100k lines of code. Here’s my feedback on why I enjoy this language so much.
Each F# code example is pa...]]></description><link>https://akhansari.tech/fsharp-features-i-love</link><guid isPermaLink="true">https://akhansari.tech/fsharp-features-i-love</guid><category><![CDATA[programming languages]]></category><category><![CDATA[Functional Programming]]></category><category><![CDATA[F#]]></category><category><![CDATA[#fsharp]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Tue, 07 Jan 2025 19:27:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/XtUd5SiX464/upload/b9471e40c4b7fe0e368264fbc72a738d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>From 2018 to 2023, I worked full-time with F#, contributing to companies with revenues up to $3 billion and working on critical codebases reaching 100k lines of code. Here’s my feedback on why I enjoy this language so much.</p>
<p>Each F# code example is paired with its equivalent in TypeScript. The aim is not to compare both languages but to provide a reference point.</p>
<h2 id="heading-less-visual-and-noise-pollution">Less Visual and Noise Pollution</h2>
<p>F# has a minimalistic philosophy: it avoids, or makes optional, unnecessary keywords and symbols like colons <code>:</code>, semicolons <code>;</code>, braces <code>{ }</code>, parentheses <code>( )</code>, and commas <code>,</code>. This offers a cleaner and more readable codebase, allowing developers to focus on the logic and structure of the program rather than wrestling with syntax and continuously typing the same characters. F# is curly only when it’s useful.</p>
<pre><code class="lang-fsharp"><span class="hljs-comment">// F#</span>
<span class="hljs-keyword">let</span> mean values =
    <span class="hljs-keyword">let</span> total = List.sum values
    total / values.Length
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-comment">// TypeScript</span>
<span class="hljs-keyword">const</span> mean = (values: <span class="hljs-built_in">number</span>[]): <span class="hljs-function"><span class="hljs-params">number</span> =&gt;</span> {
    <span class="hljs-keyword">const</span> total = values.reduce(<span class="hljs-function">(<span class="hljs-params">sum, val</span>) =&gt;</span> sum + val, <span class="hljs-number">0</span>);
    <span class="hljs-keyword">return</span> total / values.length;
};
</code></pre>
<h2 id="heading-dependency-order">Dependency Order</h2>
<p>At first sight, this feature seems surprising and unusual, but it’s actually one of my favorites.<br />F# doesn’t allow forward references, meaning you cannot use a type, function, or module before it has been defined: the code must be structured so that dependencies are introduced before they are used.</p>
<p>Not only does this prevent circular dependencies, but it also allows the code to be read sequentially, like a book. When I read F# code, I don’t need to endlessly navigate or scroll up and down to understand it. I can read it like a novel, or just go to the end to grasp the goal.</p>
<p>As for writing code, dependency order encourages me to think modularly, composing little functions and types from other little functions and types. Now that I’ve learned F#, I even apply this pattern in other languages that do allow forward references.</p>
<p>For more info: <a target="_blank" href="https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/cyclic-dependencies.html">Cyclic dependencies are evil</a>.</p>
<h2 id="heading-expression-oriented">Expression Oriented</h2>
<p>In F#, the primary focus is on <em>expressions</em> rather than <em>statements</em>.</p>
<p>Being <a target="_blank" href="https://en.wikipedia.org/wiki/Expression-oriented_programming_language">expression-oriented</a> in programming means that the language is designed in a way where nearly everything (control flow, bindings, and even code blocks) produces a value. This enables immutability, consistency, and composition.</p>
<p>A statement, on the other hand, is a construct that performs an action but does not produce a value. It is used for side effects, like modifying a variable or printing to the screen.</p>
<p>In the following code we could have used a mapped type, but the goal here is to compare expressions and statements with a sample.</p>
<pre><code class="lang-fsharp"><span class="hljs-keyword">let</span> calculateDiscount customerType orderAmount =

    <span class="hljs-comment">// the first expression is the pattern matching control flow</span>
    <span class="hljs-comment">// then its value is multiplied by the order amount</span>
    <span class="hljs-comment">// and finally the entire code block result is set to the base discount</span>
    <span class="hljs-keyword">let</span> baseDiscount =
        (<span class="hljs-keyword">match</span> customerType <span class="hljs-keyword">with</span>
         | Regular -&gt; <span class="hljs-number">0.05</span> <span class="hljs-comment">// 5%</span>
         | Premium -&gt; <span class="hljs-number">0.10</span>) <span class="hljs-comment">// 10%</span>
        * orderAmount

    <span class="hljs-keyword">let</span> additionalDiscount =
        <span class="hljs-keyword">if</span> orderAmount &gt;= <span class="hljs-number">1000.0</span> <span class="hljs-keyword">then</span> <span class="hljs-number">0.05</span> * orderAmount <span class="hljs-keyword">else</span> <span class="hljs-number">0.0</span>

    baseDiscount + additionalDiscount
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">calculateDiscount</span>(<span class="hljs-params">customerType: CustomerType, orderAmount: <span class="hljs-built_in">number</span></span>): <span class="hljs-title">number</span> </span>{

    <span class="hljs-keyword">let</span> baseDiscountRate: <span class="hljs-built_in">number</span> = <span class="hljs-number">0</span>;
    <span class="hljs-keyword">switch</span> (customerType) {
        <span class="hljs-keyword">case</span> CustomerType.Regular:
            baseDiscountRate = <span class="hljs-number">0.05</span>; <span class="hljs-comment">// 5%</span>
            <span class="hljs-keyword">break</span>;
        <span class="hljs-keyword">case</span> CustomerType.Premium:
            baseDiscountRate = <span class="hljs-number">0.10</span>; <span class="hljs-comment">// 10%</span>
            <span class="hljs-keyword">break</span>;
    }
    <span class="hljs-keyword">const</span> baseDiscount = baseDiscountRate * orderAmount;

    <span class="hljs-keyword">const</span> additionalDiscount =
        orderAmount &gt;= <span class="hljs-number">1000</span> ? <span class="hljs-number">0.05</span> * orderAmount : <span class="hljs-number">0</span>;

    <span class="hljs-keyword">return</span> baseDiscount + additionalDiscount;
}
</code></pre>
<p>For more info: <a target="_blank" href="https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/expressions-vs-statements.html">Expressions vs. statements</a>.</p>
<h2 id="heading-functions-as-first-class-citizens">Functions as First-Class Citizens</h2>
<p>One of the core principles of functional programming is that functions should be treated as <a target="_blank" href="https://en.wikipedia.org/wiki/First-class_function">first-class citizens</a>, like ordinary variables with a function type, which means they can be assigned to variables, passed as arguments, returned from other functions, and stored in data structures. Design patterns slip away to make room for just functions.</p>
<pre><code class="lang-fsharp"><span class="hljs-comment">// Assign to a variable</span>

<span class="hljs-keyword">let</span> applyPercentageDiscount discountRate price =
    price * (<span class="hljs-number">1</span>m - discountRate)

<span class="hljs-keyword">let</span> applyFlatDiscount discountAmount price =
    price - discountAmount

<span class="hljs-comment">// Pass as an argument</span>

<span class="hljs-keyword">let</span> calculateTotal applyDiscount prices =
    prices |&gt; List.map applyDiscount |&gt; List.sum

<span class="hljs-comment">// Return from another function</span>

<span class="hljs-keyword">let</span> welcomeDiscount =
    applyFlatDiscount <span class="hljs-number">10</span>m

<span class="hljs-keyword">let</span> createBlackFridayDiscount () =
    <span class="hljs-keyword">let</span> now = DateTime.Now.Date
    <span class="hljs-keyword">if</span> now = blackFridayOf now.Year
    <span class="hljs-keyword">then</span> applyPercentageDiscount <span class="hljs-number">50</span>m
    <span class="hljs-keyword">else</span> <span class="hljs-keyword">fun</span> price -&gt; price <span class="hljs-comment">// id</span>

<span class="hljs-comment">// Store in a data structure</span>

<span class="hljs-keyword">let</span> discountStrategies =
    [ welcomeDiscount
      createBlackFridayDiscount () ]
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-keyword">type</span> ApplyDiscount = <span class="hljs-function">(<span class="hljs-params">price: <span class="hljs-built_in">number</span></span>) =&gt;</span> <span class="hljs-built_in">number</span>;

<span class="hljs-comment">// Assign to a variable</span>

<span class="hljs-keyword">const</span> applyPercentageDiscount = (discountRate: <span class="hljs-built_in">number</span>, price: <span class="hljs-built_in">number</span>): <span class="hljs-function"><span class="hljs-params">number</span> =&gt;</span>
    price * (<span class="hljs-number">1</span> - discountRate);

<span class="hljs-keyword">const</span> applyFlatDiscount = (discountAmount: <span class="hljs-built_in">number</span>, price: <span class="hljs-built_in">number</span>): <span class="hljs-function"><span class="hljs-params">number</span> =&gt;</span>
    price - discountAmount;

<span class="hljs-comment">// Pass as an argument</span>

<span class="hljs-keyword">const</span> calculateTotal = (applyDiscount: ApplyDiscount, prices: <span class="hljs-built_in">number</span>[]): <span class="hljs-function"><span class="hljs-params">number</span> =&gt;</span>
    prices.map(applyDiscount).reduce(<span class="hljs-function">(<span class="hljs-params">a, b</span>) =&gt;</span> a + b, <span class="hljs-number">0</span>);

<span class="hljs-comment">// Return from another function</span>

<span class="hljs-keyword">const</span> welcomeDiscount: ApplyDiscount = <span class="hljs-function">(<span class="hljs-params">price: <span class="hljs-built_in">number</span></span>) =&gt;</span>
    applyFlatDiscount(<span class="hljs-number">10</span>, price);

<span class="hljs-keyword">const</span> createBlackFridayDiscount = (): <span class="hljs-function"><span class="hljs-params">ApplyDiscount</span> =&gt;</span> {
    <span class="hljs-keyword">const</span> now = dayjs();
    <span class="hljs-keyword">if</span> (now.isSame(blackFridayOf(now.year()), <span class="hljs-string">"day"</span>)) {
        <span class="hljs-keyword">return</span> <span class="hljs-function">(<span class="hljs-params">price: <span class="hljs-built_in">number</span></span>) =&gt;</span> applyPercentageDiscount(<span class="hljs-number">0.5</span>, price);
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-function">(<span class="hljs-params">price: <span class="hljs-built_in">number</span></span>) =&gt;</span> price;
    }
}

<span class="hljs-comment">// Store in a data structure</span>

<span class="hljs-keyword">const</span> discountStrategies = [
    welcomeDiscount,
    createBlackFridayDiscount(),
];
</code></pre>
<h2 id="heading-curried-by-default">Curried by default</h2>
<p>In F#, functions are curried by default, meaning they inherently take arguments one at a time, returning a new function for any remaining arguments. This design offers several advantages:</p>
<ul>
<li><p>Makes partial application easy and allows dependency isolation.</p>
</li>
<li><p>Improves function composition and modularity.</p>
</li>
<li><p>Enables pipeline oriented programming.</p>
</li>
</ul>
<pre><code class="lang-fsharp"><span class="hljs-comment">// default</span>
<span class="hljs-keyword">let</span> add x y = x + y
<span class="hljs-comment">// explicitly curried</span>
<span class="hljs-keyword">let</span> add x = <span class="hljs-keyword">fun</span> y -&gt; x + y

<span class="hljs-keyword">let</span> result = add <span class="hljs-number">2</span> <span class="hljs-number">3</span>
<span class="hljs-keyword">let</span> addTwo = add <span class="hljs-number">2</span>
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> add = <span class="hljs-function">(<span class="hljs-params">x: <span class="hljs-built_in">number</span></span>) =&gt;</span> (y: <span class="hljs-built_in">number</span>) =&gt; x + y;
<span class="hljs-keyword">const</span> result = add(<span class="hljs-number">2</span>)(<span class="hljs-number">3</span>);
<span class="hljs-keyword">const</span> addTwo = add(<span class="hljs-number">2</span>);
</code></pre>
<p>For more info: <a target="_blank" href="https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/currying.html">Currying</a>.</p>
<h2 id="heading-strongly-typed-and-type-inference">Strongly Typed and Type Inference</h2>
<p>It has happened to me many times: I refactor an F# codebase for days, and at the end I run the tests and everything is green. It’s amazing how confident we can be during major refactorings. This is possible thanks to F#’s strongly typed system and its type inference.<br />While one lets us rely on compile-time safety, the other lets us focus on logic rather than types. The perfect balance between safety and flexibility.</p>
<p>The other superpower I love is its excellence at domain modeling. The strongly typed system lets us model complex domains, while type inference lets us do so without cluttering our code with annotations.</p>
<p>I highly recommend reading this excellent book: <a target="_blank" href="https://fsharpforfunandprofit.com/books/">Domain Modeling Made Functional</a>.</p>
<h2 id="heading-pipeline-oriented">Pipeline Oriented</h2>
<p>Pipeline-oriented programming in F# is a style of programming that utilizes the <code>|&gt;</code> operator to create clear and concise code by chaining function calls. This approach focuses on the flow of data through a sequence of transformations or operations, making the code more declarative and readable.</p>
<p>This leads naturally to <a target="_blank" href="https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/recipe-part2.html">Railway oriented programming</a>, an elegant pattern for chaining together error-generating functions in a clean and composable way.</p>
<pre><code class="lang-fsharp"><span class="hljs-keyword">let</span> processOrders rawOrders =
    rawOrders
    |&gt; read
    |&gt; List.choose parseOrder <span class="hljs-comment">// Parse and filter invalid orders</span>
    |&gt; List.map calculateTotal <span class="hljs-comment">// Calculate total price for valid orders</span>
    |&gt; List.sortBy _.TotalPrice <span class="hljs-comment">// Sort orders by total price</span>

<span class="hljs-comment">// alternative way with function composition</span>
<span class="hljs-keyword">let</span> processOrders =
    read
    &gt;&gt; List.choose parseOrder
    &gt;&gt; List.map calculateTotal
    &gt;&gt; List.sortBy _.TotalPrice
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> processOrders = <span class="hljs-function">(<span class="hljs-params">rawOrders: Order[]</span>) =&gt;</span>
    read(rawOrders)
        .map(parseOrder)
        .filter((order): order is Order =&gt; order !== <span class="hljs-literal">null</span>)
        .map(calculateTotal)
        .sort(<span class="hljs-function">(<span class="hljs-params">a, b</span>) =&gt;</span> a.TotalPrice - b.TotalPrice);
</code></pre>
<h2 id="heading-structural-equality">Structural Equality</h2>
<p>Structural equality aligns well with immutability and refers to comparing two objects based on their actual content or structure, rather than their references or memory addresses. This is a key to enable consistent behavior, making the comparison predictable and reliable.</p>
<p>This is a valuable asset for domain modeling and tests, especially when we focus on behavior rather than implementation. Once you have tasted this feature, it’s pretty hard to go back.</p>
<p>Most F# types have built-in immutability, equality, comparison, and pretty printing. And it’s easily possible to change these type behaviors with attributes.</p>
<p>For more info: <a target="_blank" href="https://www.craigstuntz.com/posts/2020-03-09-equality-is-hard.html">Equality is Hard</a> and <a target="_blank" href="https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/convenience-types.html">Out-of-the-box behavior for types</a>.</p>
<pre><code class="lang-fsharp"><span class="hljs-comment">// all the following expressions are true</span>
[<span class="hljs-number">3</span>; <span class="hljs-number">2</span>; <span class="hljs-number">1</span>] = [<span class="hljs-number">3</span>; <span class="hljs-number">2</span>; <span class="hljs-number">1</span>] <span class="hljs-comment">// list</span>
{ Amount = <span class="hljs-number">10</span>m; Currency = EUR } = { Amount = <span class="hljs-number">10</span>m; Currency = EUR } <span class="hljs-comment">// record</span>
(<span class="hljs-number">7</span>, <span class="hljs-string">"hello"</span>, <span class="hljs-keyword">true</span>) = (<span class="hljs-number">7</span>, <span class="hljs-string">"hello"</span>, <span class="hljs-keyword">true</span>) <span class="hljs-comment">// tuple</span>
<span class="hljs-class"><span class="hljs-keyword">type</span> <span class="hljs-title">Card</span> </span>= Jack | Queen | King <span class="hljs-comment">// union</span>
King &gt; Jack
</code></pre>
<p>Structural equality is not built into JavaScript, so there is no direct sample for it. There are, however, some hacky ways to get a deep comparison, such as the Lodash library.</p>
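<p>As an illustration of what such a hand-rolled deep comparison involves, here is a minimal TypeScript sketch (illustrative only; a library like Lodash covers far more edge cases, such as Dates, Maps, and cyclic values):</p>
<pre><code class="lang-typescript">// Minimal structural equality sketch: compares contents, not references.
// Covers primitives, arrays, and plain objects only.
function deepEqual(a: unknown, b: unknown): boolean {
    if (a === b) return true; // identical references or equal primitives
    if (typeof a !== "object" || typeof b !== "object") return false;
    if (a === null || b === null) return false;
    const keysA = Object.keys(a);
    const keysB = Object.keys(b);
    if (keysA.length !== keysB.length) return false;
    return keysA.every((key) =&gt;
        deepEqual((a as Record&lt;string, unknown&gt;)[key], (b as Record&lt;string, unknown&gt;)[key])
    );
}

// Mirrors the F# sample: equal by structure, not by reference.
deepEqual([3, 2, 1], [3, 2, 1]); // true
deepEqual({ amount: 10, currency: "EUR" }, { amount: 10, currency: "EUR" }); // true
</code></pre>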
<h2 id="heading-pattern-matching-and-active-patterns">Pattern Matching and Active Patterns</h2>
<p>Pattern Matching in F# is extremely valuable; it’s one of the three features I use the most.<br />Many patterns are available by default, such as constant, identifier, variable, list, array, tuple, and record patterns. It’s also possible to enrich or group them with Active Patterns, in order to make very complex rules readable and understandable. When I go back to reading mainstream-language code and see crazy branching with if/else/switch, it hurts my eyes and my brain.</p>
<p>Another benefit is the feedback on incomplete pattern matches. While mainstream languages typically surface this only at runtime by throwing an error, in F# we can rely on the compiler, linter, or language server. This is one of many reasons refactoring is more affordable.</p>
<pre><code class="lang-fsharp"><span class="hljs-keyword">let</span> (|Strike|Spare|Open|Last|) rolls =
    <span class="hljs-keyword">match</span> rolls <span class="hljs-keyword">with</span>
    | <span class="hljs-number">10</span> :: rest -&gt; Strike rest
    | r1 :: r2 :: rest <span class="hljs-keyword">when</span> r1 + r2 = <span class="hljs-number">10</span> -&gt; Spare (r1, r2, rest)
    | r1 :: r2 :: rest -&gt; Open (r1, r2, rest)
    | _ -&gt; Last rolls

<span class="hljs-keyword">let</span> calculateBowlingScore rolls =
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">rec</span> score frames total rolls =
        <span class="hljs-keyword">match</span> frames, rolls <span class="hljs-keyword">with</span>
        | <span class="hljs-number">0</span>, _ -&gt;
            total
        | _, Strike nextRolls -&gt; 
            <span class="hljs-keyword">let</span> bonus = nextRolls |&gt; List.take <span class="hljs-number">2</span> |&gt; List.sum
            score (frames - <span class="hljs-number">1</span>) (total + <span class="hljs-number">10</span> + bonus) nextRolls
        | _, Spare (r1, r2, nextRolls) -&gt;
            <span class="hljs-keyword">let</span> bonus = nextRolls |&gt; List.tryHead |&gt; Option.defaultValue <span class="hljs-number">0</span>
            score (frames - <span class="hljs-number">1</span>) (total + <span class="hljs-number">10</span> + bonus) nextRolls
        | _, Open (r1, r2, nextRolls) -&gt;
            score (frames - <span class="hljs-number">1</span>) (total + r1 + r2) nextRolls
        | _, Last lastRolls -&gt;
            total + List.sum lastRolls
    score <span class="hljs-number">10</span> <span class="hljs-number">0</span> rolls
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-keyword">type</span> Frame =
  | { kind: <span class="hljs-string">"Strike"</span>; rolls: <span class="hljs-built_in">number</span>[] }
  | { kind: <span class="hljs-string">"Spare"</span>; rolls: <span class="hljs-built_in">number</span>[] }
  | { kind: <span class="hljs-string">"Open"</span>; rolls: <span class="hljs-built_in">number</span>[] }
  | { kind: <span class="hljs-string">"Last"</span>; rolls: <span class="hljs-built_in">number</span>[] };

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">classifyRolls</span>(<span class="hljs-params">rolls: <span class="hljs-built_in">number</span>[]</span>): <span class="hljs-title">Frame</span> </span>{
  <span class="hljs-keyword">if</span> (rolls[<span class="hljs-number">0</span>] === <span class="hljs-number">10</span>) {
    <span class="hljs-keyword">return</span> { kind: <span class="hljs-string">"Strike"</span>, rolls: rolls.slice(<span class="hljs-number">1</span>) };
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (rolls[<span class="hljs-number">0</span>] + rolls[<span class="hljs-number">1</span>] === <span class="hljs-number">10</span>) {
    <span class="hljs-keyword">return</span> { kind: <span class="hljs-string">"Spare"</span>, rolls: rolls.slice(<span class="hljs-number">2</span>) };
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (rolls.length &gt;= <span class="hljs-number">2</span>) {
    <span class="hljs-keyword">return</span> { kind: <span class="hljs-string">"Open"</span>, rolls: rolls.slice(<span class="hljs-number">2</span>) };
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-keyword">return</span> { kind: <span class="hljs-string">"Last"</span>, rolls };
  }
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">calculateBowlingScore</span>(<span class="hljs-params">baseRolls: <span class="hljs-built_in">number</span>[]</span>): <span class="hljs-title">number</span> </span>{
  <span class="hljs-keyword">const</span> score = (frames: <span class="hljs-built_in">number</span>, total: <span class="hljs-built_in">number</span>, rolls: <span class="hljs-built_in">number</span>[]): <span class="hljs-function"><span class="hljs-params">number</span> =&gt;</span> {
    <span class="hljs-keyword">if</span> (frames === <span class="hljs-number">0</span> || rolls.length === <span class="hljs-number">0</span>) <span class="hljs-keyword">return</span> total;
    <span class="hljs-keyword">const</span> frame = classifyRolls(rolls);
    <span class="hljs-keyword">switch</span> (frame.kind) {
      <span class="hljs-keyword">case</span> <span class="hljs-string">"Strike"</span>: {
        <span class="hljs-keyword">const</span> bonus = rolls.slice(<span class="hljs-number">1</span>, <span class="hljs-number">3</span>).reduce(<span class="hljs-function">(<span class="hljs-params">sum, roll</span>) =&gt;</span> sum + (roll || <span class="hljs-number">0</span>), <span class="hljs-number">0</span>);
        <span class="hljs-keyword">return</span> score(frames - <span class="hljs-number">1</span>, total + <span class="hljs-number">10</span> + bonus, frame.rolls);
      }
      <span class="hljs-keyword">case</span> <span class="hljs-string">"Spare"</span>: {
        <span class="hljs-keyword">const</span> bonus = rolls[<span class="hljs-number">2</span>] || <span class="hljs-number">0</span>;
        <span class="hljs-keyword">return</span> score(frames - <span class="hljs-number">1</span>, total + <span class="hljs-number">10</span> + bonus, frame.rolls);
      }
      <span class="hljs-keyword">case</span> <span class="hljs-string">"Open"</span>: {
        <span class="hljs-keyword">const</span> frameScore = rolls[<span class="hljs-number">0</span>] + rolls[<span class="hljs-number">1</span>];
        <span class="hljs-keyword">return</span> score(frames - <span class="hljs-number">1</span>, total + frameScore, frame.rolls);
      }
      <span class="hljs-keyword">case</span> <span class="hljs-string">"Last"</span>: {
        <span class="hljs-keyword">return</span> total + frame.rolls.reduce(<span class="hljs-function">(<span class="hljs-params">sum, roll</span>) =&gt;</span> sum + roll, <span class="hljs-number">0</span>);
      }
    }
  };

  <span class="hljs-keyword">return</span> score(<span class="hljs-number">10</span>, <span class="hljs-number">0</span>, baseRolls);
}
</code></pre>
<p>For more info: <a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/pattern-matching">Pattern Matching</a></p>
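<p>For what it’s worth, TypeScript can approximate this compile-time feedback with the common <code>never</code>-based exhaustiveness idiom (a general TypeScript pattern, not something from the samples above):</p>
<pre><code class="lang-typescript">type FrameKind = "Strike" | "Spare" | "Open" | "Last";

function assertNever(value: never): never {
    throw new Error(`Unhandled case: ${String(value)}`);
}

// If a fifth kind is ever added to FrameKind, this switch stops
// type-checking until the new case is handled: feedback comparable
// in spirit to F#'s incomplete-match warning.
function frameSymbol(kind: FrameKind): string {
    switch (kind) {
        case "Strike": return "X";
        case "Spare": return "/";
        case "Open": return "-";
        case "Last": return "L";
        default: return assertNever(kind);
    }
}
</code></pre>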
<h2 id="heading-computation-expressions-and-comprehensions">Computation Expressions and Comprehensions</h2>
<p>Computation Expressions (CEs) are the Swiss army knife of F#. They allow us to define and customize the behavior of a series of computations, encapsulating additional logic such as error handling, state tracking, asynchronous operations, sequence generation, or powerful DSLs.<br />When CEs have <code>For</code>, <code>Combine</code>, <code>Yield</code>, and <code>Zero</code> methods, they can be identified as comprehensions, like List, Array and Sequence comprehensions.</p>
<p>Here is an example of a custom CE of a neat behavior testing:</p>
<pre><code class="lang-fsharp"><span class="hljs-keyword">let</span> spec = DeciderSpecification (State.initial, evolve, decide)

<span class="hljs-meta">[&lt;Fact&gt;]</span>
<span class="hljs-keyword">let</span> ``negative balance cannot be closed`` () =
    spec {
        Given [ Withdrawn { Amount = <span class="hljs-number">50</span>m; Date = DateTime.MinValue } ]
        When (Close)
        Then (ClosingError (BalanceIsNegative <span class="hljs-number">-50</span>m))
    }
</code></pre>
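<p>Computation expressions have no direct TypeScript equivalent, but the shape of such a spec can be approximated with a plain helper function. The sketch below is purely illustrative: <code>deciderSpec</code> and its scenario fields are hypothetical stand-ins, not the API of the F# code above.</p>
<pre><code class="lang-typescript">// Given/When/Then helper for a decider, assuming the signatures
// evolve: (state, event) =&gt; state and decide: (command, state) =&gt; events.
function deciderSpec&lt;State, Event, Command&gt;(
    initial: State,
    evolve: (state: State, event: Event) =&gt; State,
    decide: (command: Command, state: State) =&gt; Event[]
) {
    return (scenario: { given: Event[]; when: Command; then: Event[] }) =&gt; {
        const state = scenario.given.reduce(evolve, initial); // rebuild state
        const actual = decide(scenario.when, state);          // run the command
        if (JSON.stringify(actual) !== JSON.stringify(scenario.then)) {
            throw new Error(`Expected ${JSON.stringify(scenario.then)}, got ${JSON.stringify(actual)}`);
        }
    };
}
</code></pre>
<p>A test then reads almost like the F# version: a scenario object with <code>given</code>, <code>when</code>, and <code>then</code> fields.</p>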
<p>This subject is too big to be elaborated in just one article. For more info:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/dsyme/dsyme-presentations/blob/master/design-notes/ces-compared.md">F# Computation Expressions, 'do' notation and list comprehensions</a></p>
</li>
<li><p><a target="_blank" href="https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/series/computation-expressions.html">The "Computation Expressions" Series</a></p>
</li>
</ul>
<h2 id="heading-fable-and-bolero">Fable and Bolero</h2>
<p>F# is a full-stack language. With <a target="_blank" href="https://fable.io/">Fable</a> it’s possible to transpile to JavaScript, Rust, and more, and with <a target="_blank" href="https://fsbolero.io/">Bolero</a> to compile to WebAssembly, enabling F# developers to write full-stack applications in a single language. Understanding Elmish can be hard at the beginning, but once you see how it works, it’s so intuitive that you wonder why all frontend views aren’t built this way.</p>
<h2 id="heading-missing-features">Missing features</h2>
<blockquote>
<p>I don't want F# to be the kind of language where the most empowered person in the discord chat is the category theorist. - Don Syme</p>
</blockquote>
<p>I really appreciate this decision-making. F# is among the most accessible languages in the functional programming family. On the other hand, to some extent, I feel limited: it forces me to write more boilerplate code or to fall back on a more object-oriented approach.</p>
<ul>
<li><p>Type classes and traits (<a target="_blank" href="https://github.com/fsharp/fslang-suggestions/issues/243">suggestion</a>)</p>
</li>
<li><p>Generalized algebraic data types (<a target="_blank" href="https://github.com/fsharp/fslang-suggestions/issues/179">suggestion</a>)</p>
</li>
<li><p>Higher kinded types (<a target="_blank" href="https://github.com/fsharp/fslang-suggestions/issues/175">suggestion</a>)</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>F# is more than just a programming language, it’s a gateway to a new way of thinking about software design. By prioritizing clarity, immutability, and composability without sacrificing simplicity, F# enables us to write clean, efficient, and sustainable codebases. Compared to many mainstream languages, F# reduces cognitive load by design. It’s such a joy to know that the code you release will require minimal maintenance and, if needed, minimal effort to add features or to refactor. Of course, there are some bad practices to avoid.</p>
<p>For those accustomed to imperative or object-oriented languages, adopting F# may seem like a leap. Yet, it’s a leap worth taking. F# provides not only a practical alternative for everyday tasks but also a richer toolkit for solving complex problems with ease. Whether you’re building backend services, exploring domain modeling, or creating full-stack applications, F# is an investment in writing better and more reliable code.</p>
<p>If you’re curious about functional programming or looking for a language that balances elegance with pragmatism, F# is ready to surprise you. Give it a try, you may never look at code the same way again.</p>
]]></content:encoded></item><item><title><![CDATA[Event Sourcing: A Matter of Definition]]></title><description><![CDATA[Event Sourcing is a straightforward pattern to both use and understand. However, why do we encounter numerous strong opinions, challenges, or even failures? The primary reason is usually due to confusion. In this article, I will attempt to elaborate ...]]></description><link>https://akhansari.tech/event-sourcing-a-matter-of-definition</link><guid isPermaLink="true">https://akhansari.tech/event-sourcing-a-matter-of-definition</guid><category><![CDATA[Event Sourcing]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Wed, 20 Nov 2024 16:02:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/YbmY_RGLUQA/upload/d48d0c2a5e8ec41fbf5a7fdc2d1639a3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Event Sourcing is a straightforward <strong>pattern</strong> to both use and understand. However, why do we encounter numerous strong opinions, challenges, or even failures? The primary reason is usually due to confusion. In this article, I will attempt to elaborate on my thoughts by demystifying the definitions in order to mitigate this confusion.</p>
<h2 id="heading-the-definition">The Definition</h2>
<p>Thanks to the community's advocacy, and particularly the efforts of Oskar Dudycz, we can observe a decrease in the level of confusion between Event Sourcing and <a target="_blank" href="https://event-driven.io/en/event_streaming_is_not_event_sourcing/">Event Streaming</a> or <a target="_blank" href="https://event-driven.io/en/dont_let_event_driven_architecture_buzzwords_fool_you/">Event Driven</a>. Sometimes, it's also easier to grasp a pattern when we initially learn <a target="_blank" href="https://event-driven.io/en/when_not_to_use_event_sourcing/">when it should not be applied</a>.</p>
<p>To make it concise, Mathias Verraes's <a target="_blank" href="https://verraes.net/2019/08/eventsourcing-state-from-events-vs-events-as-state/">definition</a> is exceptionally clear:</p>
<blockquote>
<p>A system is Event Sourced when:</p>
<ul>
<li><p>the single source of truth is a persisted history of the system's events;</p>
</li>
<li><p>and that history is taken into account for enforcing constraints on new events.</p>
</li>
</ul>
</blockquote>
<p>If the Event Sourcing definition is well-understood, we are already halfway there. However, to reach that point, it's also necessary to understand what an <strong>Event Store</strong> is in practice. Event stores are essentially <a target="_blank" href="https://event-driven.io/en/event_stores_are_key_value_stores/">key-value databases</a> with <a target="_blank" href="https://github.com/ylorph/RandomThoughts/blob/master/2019.08.09_expectations_for_an_event_store.md">features</a> such as atomic writes, optimistic concurrency, and idempotency. The key usually consists of a name and an ID, while the value is an ordered list of events. This key-value pairing is commonly referred to as a <strong>stream</strong>, also known as the <strong>history</strong>.</p>
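<p>As a rough illustration (not a real event store, which would also handle idempotency, subscriptions, and durability), such a key-value store with optimistic concurrency can be sketched in a few lines of TypeScript:</p>
<pre><code class="lang-typescript">// Streams are key -&gt; ordered list of events; appends are guarded by
// an expected version to detect concurrent writers.
class InMemoryEventStore&lt;Event&gt; {
    private streams = new Map&lt;string, Event[]&gt;();

    read(streamId: string): Event[] {
        return this.streams.get(streamId) ?? [];
    }

    // expectedVersion is the stream length the caller last observed.
    append(streamId: string, expectedVersion: number, events: Event[]): void {
        const stream = this.streams.get(streamId) ?? [];
        if (stream.length !== expectedVersion) {
            throw new Error(`Concurrency conflict on ${streamId}`);
        }
        this.streams.set(streamId, [...stream, ...events]);
    }
}
</code></pre>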
<h2 id="heading-streams-life-cycle">Stream's life-cycle</h2>
<p>Keep it <strong>short</strong>! That means short enough to read the entire stream without any performance issues. To achieve this, prior <strong>modeling</strong> is crucial. <a target="_blank" href="https://eventmodeling.org/">Event Modeling</a> is an excellent method to discover and describe a system through examples of how information changes within it over time. During modeling we can define <strong>boundaries</strong> and <strong>independent streams</strong>, and therefore how to keep them short. It’s always cheaper to refine or redo a model than to realize it was a mistake after the implementation or the release.</p>
<p>If this is not possible, my two cents is to avoid Event Sourcing altogether! It’s worthless in that case. Big streams are typically an anti-pattern: CRUD operations, cumulative data, or <a target="_blank" href="https://thinkbeforecoding.com/post/2013/07/28/Event-Sourcing-vs-Command-Sourcing">command sourcing</a> in disguise.</p>
<p>To clarify with an example: auditing is one of the outcomes of Event Sourcing, but we don’t do Event Sourcing solely for the purpose of auditing. So it shouldn’t be used merely for updating entity info; other patterns are better suited for that.</p>
<p>The <strong>snapshots</strong> technique is complex as well and should sit at the very bottom of the decision tree.</p>
<h2 id="heading-state-vs-projections">State VS Projections</h2>
<p>The final boss: the confusion between the state and projections. If we aren’t familiar with the earlier definitions, failure might occur swiftly. This one, however, can hurt slowly, hidden behind many tricks and fixes. Still, it’s an easy boss to fight, because it’s again a matter of definition.</p>
<h3 id="heading-state">State</h3>
<p>The state is <strong>private</strong>. It’s built only <strong>in-memory</strong>, each time we load the history. That’s all! As said before, it’s used to enforce constraints on new events. Jérémie Chassaing's <a target="_blank" href="https://thinkbeforecoding.com/post/2021/12/17/functional-event-sourcing-decider">explanation</a> is quite straightforward:</p>
<ol>
<li><p>Reduce the history by <strong>evolving</strong> each event <code>state -&gt; event -&gt; state</code>, to get the current state.</p>
</li>
<li><p><strong>Decide</strong> <code>command -&gt; current state -&gt; events</code>, to get the command's behavior as new facts.</p>
</li>
<li><p><strong>Append</strong> new events to the history.</p>
</li>
</ol>
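<p>The three steps can be sketched in TypeScript with a hypothetical bank-account example (all names here are illustrative):</p>
<pre><code class="lang-typescript">type Event = { type: "Deposited"; amount: number } | { type: "Withdrawn"; amount: number };
type Command = { type: "Withdraw"; amount: number };
type State = { balance: number };

const initial: State = { balance: 0 };

// 1. evolve: state -&gt; event -&gt; state
const evolve = (state: State, event: Event): State =&gt;
    event.type === "Deposited"
        ? { balance: state.balance + event.amount }
        : { balance: state.balance - event.amount };

// 2. decide: command -&gt; current state -&gt; events
const decide = (command: Command, state: State): Event[] =&gt;
    state.balance &gt;= command.amount // constraint enforced against the rebuilt state
        ? [{ type: "Withdrawn", amount: command.amount }]
        : [];

// 3. append the new events to the history
function handle(history: Event[], command: Command): Event[] {
    const current = history.reduce(evolve, initial);
    return [...history, ...decide(command, current)];
}
</code></pre>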
<h3 id="heading-projections">Projections</h3>
<p>On the other hand, projections are <strong>public</strong>, and it’s very important to point out that there can be many of them. Every time new events are appended to the history, projections are triggered right after. As with the state, the history is replayed, but this time in order to know how to update the read model.</p>
<p>What’s interesting with projections, and thus read models, is that what is projected is concise and simple: it contains neither the entire internal state nor unnecessary data. Read models are optimized for the view, and it’s cheap to have many of them. There is no need for a complex relational database.</p>
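<p>In code, a projection is essentially a fold from the history to a view-specific read model. The sketch below is illustrative (the event shapes are hypothetical):</p>
<pre><code class="lang-typescript">type Event = { type: "Deposited"; amount: number } | { type: "Withdrawn"; amount: number };

// A read model kept deliberately small: only what this view needs.
type AccountSummary = { balance: number; withdrawalCount: number };

const projectSummary = (history: Event[]): AccountSummary =&gt;
    history.reduce(
        (view, event) =&gt;
            event.type === "Deposited"
                ? { ...view, balance: view.balance + event.amount }
                : {
                      balance: view.balance - event.amount,
                      withdrawalCount: view.withdrawalCount + 1,
                  },
        { balance: 0, withdrawalCount: 0 }
    );
</code></pre>
<p>A second, independent read model over the same history would be just another fold, which is what makes multiple cheap projections possible.</p>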
<p>Common misuses are projecting the state itself, projecting from a single event instead of the history, not optimizing the read models for their views, and building a canonical or monolithic model.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Event Sourcing is a straightforward pattern to both use and understand. However, it's crucial to grasp the definitions, regardless of implementation details.</p>
]]></content:encoded></item><item><title><![CDATA[From Wrist Pain to Productivity]]></title><description><![CDATA[For a long time, my hardware setup consisted of a standard keyboard and mouse, and I’ve also gotten used to working with Windows. This was either because it was required for my job or because of my work with DotNet development. In another hand, at ho...]]></description><link>https://akhansari.tech/from-wrist-pain-to-productivity</link><guid isPermaLink="true">https://akhansari.tech/from-wrist-pain-to-productivity</guid><category><![CDATA[Linux]]></category><category><![CDATA[vim]]></category><category><![CDATA[ergonomic keyboards]]></category><category><![CDATA[rsi]]></category><dc:creator><![CDATA[Amin Khansari]]></dc:creator><pubDate>Fri, 15 Nov 2024 18:55:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/DFlHpnx2LBo/upload/a963aa91c72e3d0bab8876d503cdc2ab.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For a long time, my hardware setup consisted of a standard keyboard and mouse, and I’ve also gotten used to working with Windows. This was either because it was required for my job or because of my work with DotNet development. In another hand, at home, I usually used a Linux desktop. I was happy with both worlds and never questioned. However, due to remote work over the past few years, I no longer have a PC at home and just use an Android tablet when needed.</p>
<p>At some point, I started experiencing soreness in my wrist, to the extent that I sometimes had trouble sleeping. My first step was a quick Google search, which led me to buy a vertical mouse. It helped ease the pain, but the pain was still there. So this time I turned to message boards and YouTube feedback, and after some hesitation, I finally decided to take the plunge and buy a <a target="_blank" href="http://typematrix.com">TypeMatrix</a> keyboard.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mg68hgvz99ec37n9y3i6.png" alt="TypeMatrix" /></p>
<p>Choosing an ortholinear keyboard was one of my best decisions. It features a unique design where the keys are arranged in a grid-like pattern, with the rows and columns aligned vertically and horizontally. This arrangement offers two main advantages:</p>
<ul>
<li><p>It reduces the finger travel and improves hand positioning.</p>
</li>
<li><p>It provides a minimalist aesthetic and a more compact design.</p>
</li>
</ul>
<p>As a result, the wrist experiences less stress and, with practice, doesn’t move at all; only the fingers make slight vertical movements.</p>
<p>The other thing I told myself was to switch to ortholinear and the <a target="_blank" href="https://fr.wikipedia.org/wiki/B%C3%A9po">bépo</a> layout (the French counterpart of Colemak) at the same time. That way, I’d only have to make the effort of learning once, especially since it further reduces finger movement. Why stick with an outdated, illogical layout anyway?</p>
<p>As you can see the middle line is the most used:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vkq36dpf1nrkivybmpq4.png" alt="Layout heatmaps" /></p>
<p>Above all, what surprised me the most was my ability to effortlessly switch between ortholinear bépo and staggered qwerty keyboards. My theory is that the two layouts are so distinct that the brain instinctively adapts to the switch.</p>
<p>Unfortunately, even with all these efforts, the wrist pain returned slightly after three years. I realized I needed to do more, but at least I understood the underlying issues:</p>
<ul>
<li><p>My hand traveled too much between the keyboard and the mouse.</p>
</li>
<li><p>The Shift, Control, and Alt combinations for shortcuts felt unnatural and illogical.</p>
</li>
<li><p>The TypeMatrix keyboard forced me to keep my hands too close together, which is not a natural position.</p>
</li>
</ul>
<p>It was time to buy a split keyboard aaaand, while I was at it, to switch to NeoVim!</p>
<p><img src="https://dygma.com/cdn/shop/files/Small_Split_51411d7e-7d07-4f56-97e5-9f3408158e67_1500x.jpg?v=1690374691" alt="Dygma Defy" /></p>
<p>What a joy! It took me some time to find a layout that suited my habits, but I eventually settled on a custom Bépo layout with two layers. I still tweak it occasionally to optimize further. You can find the layout at <a target="_blank" href="https://configure.zsa.io/moonlander/layouts/yB3rJ/latest/0">ZSA website</a>.</p>
<blockquote>
<p>With split ortholinear keyboards and modern layouts, finger and wrist movements are minimized, and there's no longer a need to look at the keyboard.</p>
</blockquote>
<p>Here is my heat-map (from my old Moonlander) after a few minutes of coding:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oug2swn9g5rhst0redyz.png" alt="Image description" /></p>
<p>Now it was time to learn NeoVim. Interestingly enough, I think I spent far more time configuring Vim bindings than actually learning Vim itself. In reality, Vim is surprisingly easy to pick up, especially with the help of the <em>Which-Key</em> plugin already included in the <em>LazyVim</em> distribution. Everything about it feels natural and logical, particularly the Vim Motions. However, configuring bindings is an entirely different story, it's such a tedious process!</p>
<blockquote>
<p>NeoVim allows us to stay fully focused on code</p>
</blockquote>
<p>Since I use a completely different layout than QWERTY, I never found the classic <em>h, j, k, l</em> keys convenient. In the past, this was one of the main reasons I hesitated to switch to Vim (along with the lack of LSPs). This time, as a gamer, I decided to take the opportunity to configure classic gaming-style arrow keys for the left hand and, as a result, to remap <em>u, i, p, e</em>. You can find my dotfiles on my GitHub.</p>
<p>After several months of using NeoVim on Windows, I began to feel increasingly limited. The main reason is that the user experience in Windows is vertical and dictated by Microsoft's vision and roadmap, whereas Linux provides a horizontal experience shaped by users themselves, thanks to tools and contributions from the community. The second reason is that Windows is heavily GUI-centric, while Linux is CLI/TUI-centric, which is far better suited for developers, especially with the rise of the new Rust CLI tools ecosystem.</p>
<blockquote>
<p>Linux is distraction-free by default</p>
</blockquote>
<p>I did some research and ultimately chose <a target="_blank" href="https://system76.com/cosmic">Cosmic Desktop</a>. It’s the only desktop I found that has a great tiling window manager without hassle. <a target="_blank" href="https://fedoraproject.org/atomic-desktops/cosmic/">Fedora Cosmic Atomic</a> is the perfect match as Linux distribution. Immutable systems are secure and require low maintenance.</p>
<p><a target="_blank" href="https://vimium.github.io">Vimium</a> also lets me browse without needing the mouse at all.<br />Finally, the only application that still forces me to use the mouse is… Slack!</p>
<p>I usually launch <a target="_blank" href="https://wezterm.org/">Wezterm</a> or <a target="_blank" href="https://ghostty.org/">Ghostty</a> in full screen and without borders, and start coding or doing other tasks. I would love to never have to leave my terminal and start to click-click-click...</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>I no longer have wrist pain, and in the process, I’ve gained in productivity.<br />What more could anyone ask for!</p>
<p><img src="https://static.wixstatic.com/media/59378b_9a0ba0ecff9143b08bb1e4638dac7ec5~mv2.jpg/v1/fill/w_980,h_735,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/59378b_9a0ba0ecff9143b08bb1e4638dac7ec5~mv2.jpg" alt /></p>
]]></content:encoded></item></channel></rss>