Building CLI apps with TypeScript in 2026

    We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't.

    In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion.

    The naïve approach: parsing process.argv

    Let's start with the most basic approach. Say we want a greeting program that takes a name and optionally repeats the greeting:

    // greet.ts
    const args = process.argv.slice(2);
    
    let name: string | undefined;
    let count = 1;
    
    for (let i = 0; i < args.length; i++) {
      if (args[i] === "--name" || args[i] === "-n") {
        name = args[++i];
      } else if (args[i] === "--count" || args[i] === "-c") {
        count = parseInt(args[++i], 10);
      }
    }
    
    if (!name) {
      console.error("Error: --name is required");
      process.exit(1);
    }
    
    for (let i = 0; i < count; i++) {
      console.log(`Hello, ${name}!`);
    }
    

    Run node greet.js --name Alice --count 3 and you'll get three greetings.

    But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option.
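To see how quickly this bites, here is a minimal sketch of the defensive parsing you end up writing by hand. The `parseCount` helper is hypothetical, just illustrating the checks the naïve loop above skips:

```typescript
// parseInt happily returns NaN for non-numeric input, and the naïve
// loop above would pass that NaN straight into the greeting loop.
// A hypothetical helper doing the checks manually:
function parseCount(raw: string | undefined): number {
  if (raw === undefined) {
    throw new Error("--count requires a value");
  }
  const n = Number.parseInt(raw, 10);
  if (Number.isNaN(n) || n < 1) {
    throw new Error(`--count must be a positive integer, got "${raw}"`);
  }
  return n;
}

console.log(parseCount("3")); // 3
// parseCount("foo") and parseCount(undefined) both throw
```

Multiply this by every option and you can see where the boilerplate comes from.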

    The traditional libraries

    You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems:

    // With Commander.js
    import { program } from "commander";
    
    program
      .requiredOption("-n, --name <name>", "Name to greet")
      .option("-c, --count <number>", "Number of times to greet", "1")
      .parse();
    
    const opts = program.opts();
    

    These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in.

    The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints.
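As a sketch (the `Opts` shape and `validate` helper are hypothetical, standing in for what these libraries hand you), the post-hoc checks look like this:

```typescript
// With a configuration-based library, every option lands in one flat,
// all-optional bag. The exclusivity rules only exist in hand-written
// runtime checks that the type system knows nothing about:
interface Opts {
  port?: number;
  host?: string;
  url?: string;
}

function validate(opts: Opts): string {
  const serverFlags = opts.port !== undefined || opts.host !== undefined;
  const clientFlags = opts.url !== undefined;
  if (serverFlags && clientFlags) {
    return "error: --port/--host cannot be combined with --url";
  }
  if (!serverFlags && !clientFlags) {
    return "error: specify either server options or a client --url";
  }
  return "ok";
}
```

Even after `validate` passes, downstream code still sees all fields as optional and keeps re-checking them.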

    Enter Optique

    Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have.

    Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context.

    Let's rebuild our greeting program:

    import { object } from "@optique/core/constructs";
    import { option } from "@optique/core/primitives";
    import { integer, string } from "@optique/core/valueparser";
    import { withDefault } from "@optique/core/modifiers";
    import { run } from "@optique/run";
    
    const parser = object({
      name: option("-n", "--name", string()),
      count: withDefault(option("-c", "--count", integer({ min: 1 })), 1),
    });
    
    const config = run(parser);
    // config is typed as { name: string; count: number }
    
    for (let i = 0; i < config.count; i++) {
      console.log(`Hello, ${config.name}!`);
    }
    

    Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes.

    Install it with your package manager of choice:

    npm add @optique/core @optique/run
    # or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run
    

    Building up: a file converter

    Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file.

    import { object } from "@optique/core/constructs";
    import { withDefault } from "@optique/core/modifiers";
    import { argument, option } from "@optique/core/primitives";
    import { choice, string } from "@optique/core/valueparser";
    import { run } from "@optique/run";
    
    const parser = object({
      input: argument(string({ metavar: "INPUT" })),
      output: option("-o", "--output", string({ metavar: "FILE" })),
      format: withDefault(
        option("-f", "--format", choice(["json", "yaml", "toml"])),
        "json"
      ),
      pretty: option("-p", "--pretty"),
      verbose: option("-v", "--verbose"),
    });
    
    const config = run(parser, {
      help: "both",
      version: { mode: "both", value: "1.0.0" },
    });
    
    // config.input: string
    // config.output: string
    // config.format: "json" | "yaml" | "toml"
    // config.pretty: boolean
    // config.verbose: boolean
    

    The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time.

    The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values).
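Because the union is closed, you can also get exhaustiveness checking downstream. A sketch with a plain union type (independent of Optique; the `Format` alias and `extensionFor` function are illustrative):

```typescript
// TypeScript narrows a closed union through a switch. The `never`
// assignment in the default branch fails to compile if a new format
// is ever added to the union but not handled here.
type Format = "json" | "yaml" | "toml";

function extensionFor(format: Format): string {
  switch (format) {
    case "json": return ".json";
    case "yaml": return ".yaml";
    case "toml": return ".toml";
    default: {
      const unreachable: never = format;
      throw new Error(`unhandled format: ${unreachable}`);
    }
  }
}
```

This pairs naturally with `choice()`: the parser guarantees the value at runtime, and the compiler guarantees you handle every case.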

    Mutually exclusive options

    Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both:

    import { object, or } from "@optique/core/constructs";
    import { withDefault } from "@optique/core/modifiers";
    import { argument, constant, option } from "@optique/core/primitives";
    import { integer, string, url } from "@optique/core/valueparser";
    import { run } from "@optique/run";
    
    const parser = or(
      // Server mode
      object({
        mode: constant("server"),
        port: option("-p", "--port", integer({ min: 1, max: 65535 })),
        host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
      }),
      // Client mode
      object({
        mode: constant("client"),
        url: argument(url()),
      }),
    );
    
    const config = run(parser);
    

    The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator.

    TypeScript infers a discriminated union:

    type Config =
      | { mode: "server"; port: number; host: string }
      | { mode: "client"; url: URL };
    

    Now you can write type-safe code that handles each mode:

    if (config.mode === "server") {
      console.log(`Starting server on ${config.host}:${config.port}`);
    } else {
      console.log(`Connecting to ${config.url.hostname}`);
    }
    

    Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist.

    This is the key difference from configuration-based libraries. With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI.

    Subcommands

    For larger tools, you'll want subcommands. Optique handles this with the command() parser:

    import { object, or } from "@optique/core/constructs";
    import { optional } from "@optique/core/modifiers";
    import { argument, command, constant, option } from "@optique/core/primitives";
    import { string } from "@optique/core/valueparser";
    import { run } from "@optique/run";
    
    const parser = or(
      command("add", object({
        action: constant("add"),
        key: argument(string({ metavar: "KEY" })),
        value: argument(string({ metavar: "VALUE" })),
      })),
      command("remove", object({
        action: constant("remove"),
        key: argument(string({ metavar: "KEY" })),
      })),
      command("list", object({
        action: constant("list"),
        pattern: optional(option("-p", "--pattern", string())),
      })),
    );
    
    const result = run(parser, { help: "both" });
    
    switch (result.action) {
      case "add":
        console.log(`Adding ${result.key}=${result.value}`);
        break;
      case "remove":
        console.log(`Removing ${result.key}`);
        break;
      case "list":
        console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`);
        break;
    }
    

    Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands.

    The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them.

    Shell completion

    Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run():

    const config = run(parser, {
      help: "both",
      version: { mode: "both", value: "1.0.0" },
      completion: "both",
    });
    

    Users can then generate completion scripts:

    $ myapp --completion bash >> ~/.bashrc
    $ myapp --completion zsh >> ~/.zshrc
    $ myapp --completion fish > ~/.config/fish/completions/myapp.fish
    

    The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add.

    Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free.

    Integrating with validation libraries

    Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers:

    import { z } from "zod";
    import { zod } from "@optique/zod";
    import { option } from "@optique/core/primitives";
    
    const email = option("--email", zod(z.string().email()));
    const port = option("--port", zod(z.coerce.number().int().min(1).max(65535)));
    

    Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to.

    Prefer Valibot? The @optique/valibot package works the same way:

    import * as v from "valibot";
    import { valibot } from "@optique/valibot";
    import { option } from "@optique/core/primitives";
    
    const email = option("--email", valibot(v.pipe(v.string(), v.email())));
    

    Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable.

    Tips

    A few things I've learned building CLIs with Optique:

    Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers.

    Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters.

    Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present.
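The difference shows up in every consumer of the config. A sketch with plain types (the function names are illustrative, not Optique API):

```typescript
// With optional(), the undefined case leaks into every call site:
type WithOptional = { count: number | undefined };
function repeatOptional(cfg: WithOptional): number {
  return cfg.count ?? 1; // the ?? fallback repeats at every use site
}

// With withDefault(), the default is applied once at parse time and
// downstream code simply sees `number`:
type WithDefaultApplied = { count: number };
function repeat(cfg: WithDefaultApplied): number {
  return cfg.count; // no undefined handling needed
}
```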

    Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic:

    import { parse } from "@optique/core/parser";
    import assert from "node:assert/strict";
    
    const result = parse(parser, ["--name", "Alice", "--count", "3"]);
    if (result.success) {
      assert.equal(result.value.name, "Alice");
      assert.equal(result.value.count, 3);
    }
    

    This is especially valuable for complex parsers with many edge cases.

    Going further

    We've covered the fundamentals, but Optique has more to offer:

    • Async value parsers for validating against external sources, like checking if a Git branch exists or if a URL is reachable
    • Path validation with path() for checking file existence, directory structure, and file extensions
    • Custom value parsers for domain-specific types (though Zod/Valibot integration is usually easier)
    • Reusable option groups with merge() for sharing common options across subcommands
    • The @optique/temporal package for parsing dates and times using the Temporal API

    Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios.

    That's it

    Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production.

    The source is on GitHub, and packages are available on both npm and JSR.


    Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.
