
Your CLI's completion should know what options you've already typed

    Consider Git's -C option:

    git -C /path/to/repo checkout <TAB>
    

    When you hit Tab, Git completes branch names from /path/to/repo, not your
    current directory. The completion is context-aware—it depends on the value of
    another option.

    Most CLI parsers can't do this. They treat each option in isolation, so
    completion for --branch has no way of knowing the --repo value. You end up
    with two unpleasant choices: either show completions for all possible
    branches across all repositories (useless), or give up on completion entirely
    for these options.

    Optique 0.10.0 introduces a dependency system that solves this problem while
    preserving full type safety.

    Static dependencies with or()

    Optique already handles certain kinds of dependent options via the or()
    combinator:

    import { flag, object, option, or, string } from "@optique/core";
    
    const outputOptions = or(
      object({
        json: flag("--json"),
        pretty: flag("--pretty"),
      }),
      object({
        csv: flag("--csv"),
        delimiter: option("--delimiter", string()),
      }),
    );
    

    TypeScript knows that if json is true, you'll have a pretty field, and if
    csv is true, you'll have a delimiter field. The parser enforces this at
    runtime, and shell completion will suggest --pretty only when --json is
    present.
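
    To make this concrete, the inferred result is roughly the following union
    (a sketch; the exact types Optique infers for flag() may differ):

    // Rough sketch of the result type or() produces for outputOptions.
    // The exact flag() result type in Optique may differ.
    type OutputOptions =
      | { json: true; pretty: boolean }     // the --json branch
      | { csv: true; delimiter: string };   // the --csv branch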

    This works well when the valid combinations are known at definition time. But
    it can't handle cases where valid values depend on runtime input—like
    branch names that vary by repository.

    Runtime dependencies

    Common scenarios include:

    • A deployment CLI where --environment affects which services are available
    • A database tool where --connection affects which tables can be completed
    • A cloud CLI where --project affects which resources are shown

    In each case, you can't know the valid values until you know what the user
    typed for the dependency option. Optique 0.10.0 introduces dependency() and
    derive() to handle exactly this.

    The dependency system

    The core idea is simple: mark one option as a dependency source, then create
    derived parsers that use its value.

    import {
      choice,
      dependency,
      message,
      object,
      option,
      string,
    } from "@optique/core";
    
    function getRefsFromRepo(repoPath: string): string[] {
      // In real code, this would read from the Git repository
      return ["main", "develop", "feature/login"];
    }
    
    // Mark as a dependency source
    const repoParser = dependency(string());
    
    // Create a derived parser
    const refParser = repoParser.derive({
      metavar: "REF",
      factory: (repoPath) => {
        const refs = getRefsFromRepo(repoPath);
        return choice(refs);
      },
      defaultValue: () => ".",
    });
    
    const parser = object({
      repo: option("--repo", repoParser, {
        description: message`Path to the repository`,
      }),
      ref: option("--ref", refParser, {
        description: message`Git reference`,
      }),
    });
    

    The factory function is where the dependency gets resolved. It receives the
    actual value the user provided for --repo and returns a parser that validates
    against refs from that specific repository.
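
    Conceptually, if the user has typed --repo /work/app (an example path) and
    asks for --ref completion, Optique resolves the derived parser roughly like
    this sketch:

    // Simplified sketch for `--repo /work/app --ref <TAB>`:
    // the factory runs with the actual --repo value...
    const concreteRefParser = choice(getRefsFromRepo("/work/app"));
    // ...and --ref is then validated and completed against the result.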

    Under the hood, Optique uses a three-phase parsing strategy:

    1. Parse all options in a first pass, collecting dependency values
    2. Call factory functions with the collected values to create concrete parsers
    3. Re-parse derived options using those dynamically created parsers

    This means both validation and completion work correctly—if the user has
    already typed --repo /some/path, the --ref completion will show refs from
    that path.
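
    For example, running the parser above against concrete arguments yields a
    fully typed result. A sketch, assuming the run() helper from @optique/run
    (the argument values are illustrative):

    import { run } from "@optique/run";

    // my-cli --repo /work/app --ref develop
    const config = run(parser);
    // config.repo -> "/work/app"
    // config.ref  -> "develop", validated against getRefsFromRepo("/work/app")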

    Repository-aware completion with @optique/git

    The @optique/git package provides async value parsers that read from Git
    repositories. Combined with the dependency system, you can build CLIs with
    repository-aware completion:

    import {
      command,
      dependency,
      message,
      object,
      option,
      string,
    } from "@optique/core";
    import { gitBranch } from "@optique/git";
    
    const repoParser = dependency(string());
    
    const branchParser = repoParser.deriveAsync({
      metavar: "BRANCH",
      factory: (repoPath) => gitBranch({ dir: repoPath }),
      defaultValue: () => ".",
    });
    
    const checkout = command(
      "checkout",
      object({
        repo: option("--repo", repoParser, {
          description: message`Path to the repository`,
        }),
        branch: option("--branch", branchParser, {
          description: message`Branch to checkout`,
        }),
      }),
    );
    

    Now when you type my-cli checkout --repo /path/to/project --branch <TAB>, the
    completion will show branches from /path/to/project. The defaultValue of
    "." means that if --repo isn't specified, it falls back to the current
    directory.

    Multiple dependencies

    Sometimes a parser needs values from multiple options. The deriveFrom()
    function handles this:

    import {
      choice,
      dependency,
      deriveFrom,
      message,
      object,
      option,
    } from "@optique/core";
    
    function getAvailableServices(env: string, region: string): string[] {
      return [`${env}-api-${region}`, `${env}-web-${region}`];
    }
    
    const envParser = dependency(choice(["dev", "staging", "prod"] as const));
    const regionParser = dependency(choice(["us-east", "eu-west"] as const));
    
    const serviceParser = deriveFrom({
      dependencies: [envParser, regionParser] as const,
      metavar: "SERVICE",
      factory: (env, region) => {
        const services = getAvailableServices(env, region);
        return choice(services);
      },
      defaultValues: () => ["dev", "us-east"] as const,
    });
    
    const parser = object({
      env: option("--env", envParser, {
        description: message`Deployment environment`,
      }),
      region: option("--region", regionParser, {
        description: message`Cloud region`,
      }),
      service: option("--service", serviceParser, {
        description: message`Service to deploy`,
      }),
    });
    

    The factory receives values in the same order as the dependency array. If
    some dependencies aren't provided, Optique uses the defaultValues.
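
    The parsed result for this example is roughly (a sketch of the inferred
    type):

    // Rough sketch of the inferred result type for this parser:
    type DeployConfig = {
      env: "dev" | "staging" | "prod";
      region: "us-east" | "eu-west";
      service: string; // validated against getAvailableServices(env, region)
    };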

    Async support

    Real-world dependency resolution often involves I/O—reading from Git
    repositories, querying APIs, accessing databases. Optique provides async
    variants for these cases:

    import { dependency, string } from "@optique/core";
    import { gitBranch } from "@optique/git";
    
    const repoParser = dependency(string());
    
    const branchParser = repoParser.deriveAsync({
      metavar: "BRANCH",
      factory: (repoPath) => gitBranch({ dir: repoPath }),
      defaultValue: () => ".",
    });
    

    The @optique/git package uses isomorphic-git under the hood, so
    gitBranch(), gitTag(), and gitRef() all work in both Node.js and Deno.

    There's also deriveSync() for when you need to be explicit about synchronous
    behavior, and deriveFromAsync() for multiple async dependencies.
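
    As a rough sketch, assuming deriveFromAsync() takes the same options as
    deriveFrom() but lets the factory do asynchronous work before producing a
    parser (fetchServices() is a hypothetical lookup, not part of Optique):

    import { choice, dependency, deriveFromAsync } from "@optique/core";

    // Hypothetical async lookup (API call, database query, ...).
    declare function fetchServices(
      env: string,
      region: string,
    ): Promise<string[]>;

    const envParser = dependency(choice(["dev", "staging", "prod"] as const));
    const regionParser = dependency(choice(["us-east", "eu-west"] as const));

    const serviceParser = deriveFromAsync({
      dependencies: [envParser, regionParser] as const,
      metavar: "SERVICE",
      factory: async (env, region) => choice(await fetchServices(env, region)),
      defaultValues: () => ["dev", "us-east"] as const,
    });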

    Wrapping up

    The dependency system lets you build CLIs where options are aware of each
    other—not just for validation, but for shell completion too. You get type
    safety throughout: TypeScript knows the relationship between your dependency
    sources and derived parsers, and invalid combinations are caught at compile
    time.

    This is particularly useful for tools that interact with external systems where
    the set of valid values isn't known until runtime. Git repositories, cloud
    providers, databases, container registries—anywhere the completion choices
    depend on context the user has already provided.

    This feature will be available in Optique 0.10.0. To try the pre-release:

    deno add jsr:@optique/core@0.10.0-dev.311
    

    Or with npm:

    npm install @optique/core@0.10.0-dev.311
    

    See the documentation for more details.
