
Piero Bosio — Personal Social Web Site (Fediverse)

A social forum federated with the rest of the world. Instances don't matter; people do.
  • You've probably written something like this in Commander.js.

```typescript
import { Command, Option } from "@commander-js/extra-typings";

const program = new Command()
  .addOption(
    new Option("--token <token>", "API token").conflicts([
      "username",
      "password",
      "oauthClientId",
      "oauthClientSecret",
    ]),
  )
  .addOption(
    new Option("--username <username>", "Basic auth username").conflicts([
      "token",
      "oauthClientId",
      "oauthClientSecret",
    ]),
  )
  .addOption(
    new Option("--password <password>", "Basic auth password").conflicts([
      "token",
      "oauthClientId",
      "oauthClientSecret",
    ]),
  )
  .addOption(
    new Option("--oauth-client-id <id>", "OAuth client id").conflicts([
      "token",
      "username",
      "password",
    ]),
  )
  .addOption(
    new Option("--oauth-client-secret <secret>", "OAuth client secret")
      .conflicts(["token", "username", "password"]),
  );

program.parse();
const options = program.opts();
```

It compiles. It runs. Commander.js rejects --token abc --username alice with the conflict error you'd expect. Look at what TypeScript thinks options is, though.

```typescript
{
  token?: string | undefined;
  username?: string | undefined;
  password?: string | undefined;
  oauthClientId?: string | undefined;
  oauthClientSecret?: string | undefined;
}
```

Five independent optional fields. Nothing in that type says token, basic auth, and OAuth are three separate worlds. The .conflicts() chains are runtime instructions to a validator. They never touch the type. When your code reaches in and uses options, you still have to narrow by hand. Is token set? If not, can I assume username and password are both there? If you get that branching wrong, the compiler has nothing to say about it.

```typescript
if (options.token != null) {
  useBasicAuth(options.username!); // wrong branch, but it compiles
}
```

The gap between what the validator knows and what the type knows is what pushed me to start building Optique. It was originally a side project for a CLI I was writing, and it's grown into something people use in earnest. A few days ago I tagged 1.0.0. The part that matters most to me is that the same parser structure now covers environment variables, config files, and prompts instead of stopping at argv.

I'll assume you've used Commander.js or Yargs before. I don't want to pretend they're bad; they're mature tools with real users. My goal is to show where they stop, and what's on the other side of that line.

## Runtime checks aren't type-level knowledge

The obvious first objection to everything I just said is that Commander.js's .conflicts() isn't new. It's been there for years. Yargs has it too, along with .implies() on both sides. You can declare that --token conflicts with --username, and Yargs will even let you declare that --username implies --password so the two are required together. These aren't missing features. On current versions, Commander.js 14 with @commander-js/extra-typings 14 handles the simple case correctly. Passing --token abc --username alice produces option '--token <token>' cannot be used with option '--username <username>', which is exactly the message the user needs.

Commander.js doesn't give you a type that reflects any of this, though. The Option.conflicts() method in extra-typings returns the Option instance unchanged. There's no generic parameter threading through the chain, accumulating which options are mutually exclusive with which others. So .opts() comes back as five optional fields, and if you write the snippet above, nothing stops you.
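What disciplined code has to do instead is re-derive the grouping by hand at every use site. A sketch against the five-optional-field type above (useTokenAuth and friends are hypothetical helpers):

```typescript
// None of this branching is checked against the .conflicts() declarations;
// the compiler only sees five independent optional fields.
if (options.token != null) {
  useTokenAuth(options.token);
} else if (options.username != null && options.password != null) {
  useBasicAuth(options.username, options.password);
} else if (options.oauthClientId != null && options.oauthClientSecret != null) {
  useOauth(options.oauthClientId, options.oauthClientSecret);
} else {
  throw new Error("No complete set of auth options was provided");
}
```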
More often the careful version doesn't get written, and the non-null assertion will be there in production, waiting for the input where the runtime let both through because they're actually compatible, or for the refactor that moves this code somewhere the invariant no longer holds.

Commander.js also has no way to mark a group of options as required together. If you pass --username alice and forget --password, Commander.js runs happily; the user gets a half-configured basic auth at best. .implies() exists, but it's about setting values ("if --free-drink is passed, set --drink to small"), not about requiring co-occurrence.

Yargs is stranger. It has .conflicts() and .implies(), and .implies() does enforce co-occurrence: --username without --password fails at runtime. But the interaction between the two gets confusing fast. I tried --token abc --username alice with both wired up. What Yargs told the user was:

```
Missing dependent arguments: username -> password
```

That's the .implies() talking. Yargs checks implies before conflicts, so the real issue (token and username are mutually exclusive) stays buried behind an unrelated complaint about a password the user never mentioned. If you add --password too, then you finally get the mutually-exclusive error. For the user on the receiving end, this is the kind of message that makes them file a bug against you.

The Yargs result type is worth seeing as well:

```typescript
{
  [x: string]: unknown;
  oauthClientId: string | undefined;
  "oauth-client-id": string | undefined;
  // …the other options, each of them in both kebab- and camel-cased forms
}
```

The index signature [x: string]: unknown means any typo on a property access silently becomes unknown. I tried parsed.tokenn and TypeScript accepted it; the value came back undefined at runtime. Each option also shows up under both kebab-case and camelCase keys. None of this has anything to do with the exclusivity constraints I declared. It's just what happens when parser output is typed as a loose dictionary.
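For reference, the wiring behind these experiments looks roughly like this (a sketch reconstructing the setup described above, not a quote from it):

```typescript
import yargs from "yargs";
import { hideBin } from "yargs/helpers";

const parsed = yargs(hideBin(process.argv))
  .option("token", { type: "string" })
  .option("username", { type: "string" })
  .option("password", { type: "string" })
  // --token is mutually exclusive with the basic-auth pair…
  .conflicts("token", ["username", "password"])
  // …and --username requires --password to be present.
  .implies("username", "password")
  .parseSync();
```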
This is less about missing features than about where the features stop. Once you cross into the return type of .opts() or .parseSync(), the constraints are gone. The compiler sees whatever shape the signature promised, and that shape doesn't know what you declared.

## Types that know which branch you picked

Here's the same CLI in Optique.

```typescript
import { object, or } from "@optique/core/constructs";
import { constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = or(
  object({
    auth: constant("token" as const),
    token: option("--token", string({ metavar: "TOKEN" })),
  }),
  object({
    auth: constant("basic" as const),
    username: option("--username", string({ metavar: "USER" })),
    password: option("--password", string({ metavar: "PASS" })),
  }),
  object({
    auth: constant("oauth" as const),
    clientId: option("--oauth-client-id", string({ metavar: "ID" })),
    clientSecret: option("--oauth-client-secret", string({ metavar: "SECRET" })),
  }),
);

const parsed = run(parser);
```

The shape of the code is different. Instead of declaring each option in isolation and then chaining constraints between them, you describe three complete parsers, one per auth method, and pass them to or(). Each branch is an object() that lists the options belonging to that branch. The constant() calls are discriminators; they don't consume input, they just tag the result.

The type parsed gets is:

```typescript
| { readonly auth: "token"; readonly token: string }
| { readonly auth: "basic"; readonly username: string; readonly password: string }
| { readonly auth: "oauth"; readonly clientId: string; readonly clientSecret: string }
```

That's a discriminated union. When you consume the parsed value, TypeScript knows which fields are available based on auth:

```typescript
switch (parsed.auth) {
  case "token":
    await callApiWithToken(parsed.token);
    break;
  case "basic":
    await callApiWithBasic(parsed.username, parsed.password);
    break;
  case "oauth":
    await callApiWithOauth(parsed.clientId, parsed.clientSecret);
    break;
}
```

Inside the "token" case, parsed.username is a type error. Inside "basic", parsed.token is a type error. Every field inside its branch is plain string, not string | undefined, so no non-null assertions are asked for. If you add a fourth auth method next year and forget to update the switch, the compiler complains, as the sketch below shows.
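That exhaustiveness guarantee is the standard never trick, which the discriminated union gives you for free (a sketch; assertNever is a conventional helper, not part of Optique):

```typescript
function assertNever(value: never): never {
  throw new Error(`Unhandled auth method: ${JSON.stringify(value)}`);
}

switch (parsed.auth) {
  case "token":
  case "basic":
  case "oauth":
    break;
  default:
    // If a fourth branch is added to the parser, parsed no longer narrows
    // to never here, and this call stops type-checking.
    assertNever(parsed);
}
```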
The runtime errors follow the parser shape too. A token-plus-username mix fails as a conflict: "--token" "abc" and "--username" "alice" cannot be used together. A basic-auth branch without its password fails as missing input: Missing option --password. No check-ordering coincidences.

I want to be clear that this idea isn't mine. Parser combinators have been a standard technique in functional programming for decades, and Haskell's optparse-applicative has been applying them to CLI parsing since 2012. TypeScript's conditional types and discriminated union inference happen to be strong enough that this style of API doesn't ask you to write types by hand. You compose parsers, and the types work out.

I started on Optique while trying to express this kind of structure in Fedify's CLI. The closest tool for the shape of problem I had was Cliffy, but it didn't fit[^cliffy]. Commander.js and Yargs couldn't express it the way I wanted either. So here we are.

[^cliffy]: Two reasons. Cliffy is Deno-only, which rules it out for a CLI that needs to ship on Node.js and Bun. And even in Deno, Cliffy's API is declarative in roughly the same way as Yargs: options and constraints are declared against a runtime validator, not composed into types. The limits we've just been walking through on Commander.js and Yargs show up in Cliffy too, in a different dialect.

## CLI arguments are one place values come from; there are others

So far this example is unrealistically argv-only. Real CLIs pull some values from environment variables (GITHUB_TOKEN, DATABASE_URL, AWS_REGION), some from config files because nobody wants to retype seventeen flags on every invocation, and some from interactive prompts because secrets shouldn't sit in shell history. On Commander.js or Yargs, each of those sources is usually a separate mechanism. Commander.js has .env() on options, which is fine for that one dimension. Config files get a separate library, or a hand-rolled loader at the top of main(). Interactive prompts are Inquirer.js, wired in somewhere. Each has its own validation path, and reconciling precedence between them is your problem. In 1.0, Optique treats these four sources as one problem.

## One parser, four sources

Take the auth example again, but think about where each value really comes from in practice. API tokens come from environment variables; nobody types GITHUB_TOKEN on the command line. Passwords should be prompted, not left in shell history. OAuth client credentials get saved to a config file because they're project-scoped and you want them versioned (the client secret less so, but let's keep the example simple). Here's how you'd wire that up in Optique 1.0.

```typescript
import { object, or } from "@optique/core/constructs";
import { constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { bindEnv, createEnvContext } from "@optique/env";
import { bindConfig, createConfigContext } from "@optique/config";
import { prompt } from "@optique/inquirer";
import { runAsync } from "@optique/run";
import { z } from "zod";

const envCtx = createEnvContext({ prefix: "MYAPP_" });
const cfgCtx = createConfigContext({
  schema: z.object({
    oauth: z.object({
      clientId: z.string().optional(),
      clientSecret: z.string().optional(),
    }).optional(),
  }),
});

const parser = or(
  object({
    auth: constant("token" as const),
    token: bindEnv(
      option("--token", string()),
      { context: envCtx, key: "TOKEN", parser: string() },
    ),
  }),
  object({
    auth: constant("basic" as const),
    username: option("--username", string()),
    password: prompt(
      option("--password", string()),
      { type: "password", message: "Password:", mask: true },
    ),
  }),
  object({
    auth: constant("oauth" as const),
    clientId: bindConfig(
      option("--oauth-client-id", string()),
      { context: cfgCtx, key: (c) => c?.oauth?.clientId },
    ),
    clientSecret: bindConfig(
      option("--oauth-client-secret", string()),
      { context: cfgCtx, key: (c) => c?.oauth?.clientSecret },
    ),
  }),
);

const parsed = await runAsync(parser, { contexts: [envCtx, cfgCtx] });
```

The parser structure hasn't changed. It's still three branches, each an object() of required fields. What changed is that each field is now wrapped with one or more of bindEnv(), bindConfig(), and prompt(). These wrappers don't alter what a field means; they describe where to look for its value if argv didn't supply one.

Resolution order follows wrapper nesting from the inside out. Whatever the user put on the command line wins, then the environment variable, then the config file, then the prompt. If the user gave --token explicitly, MYAPP_TOKEN is ignored for this run. If they didn't but it's set in the environment, the prompt never fires. You can stack all four on a single option if you want; a common pattern is prompt(bindEnv(bindConfig(option(…), …), …), …), which gives you CLI then env then config then prompt on one value.
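Spelled out on a single option, that stacking looks something like this (a sketch reusing the contexts above; the --api-key option and apiKey config field are hypothetical, so the config schema would need a matching field):

```typescript
// CLI flag wins, then MYAPP_API_KEY, then the config file, then a prompt.
const apiKey = prompt(
  bindEnv(
    bindConfig(
      option("--api-key", string()),
      { context: cfgCtx, key: (c) => c?.apiKey }, // hypothetical config key
    ),
    { context: envCtx, key: "API_KEY", parser: string() },
  ),
  { type: "password", message: "API key:", mask: true },
);
```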
The inferred type is unchanged from the pure-argv version above. The branches are still discriminated by auth. Every field in the selected branch is still string, not string | undefined. The type system has no idea that some of these values took a detour through the filesystem or a TTY before they got to you.

A small related feature is fail<T>(). Sometimes a value shouldn't be exposed as a CLI flag at all; maybe it's a secret that should only come from config or env. bindConfig(fail<string>(), { … }) expresses that. The parser has no CLI surface for the field, but it still participates in the type, and it still feeds the config value into the result.

The "express constraints through structure" idea earns its keep here. Teams who've wanted this combination on Commander.js or Yargs have historically had to stitch it together: .env() here, a config loader there, an Inquirer.js block inside the action handler, then a pile of if-statements reconciling what to believe when two sources disagree. The reconciliation code is where the bugs live. bindEnv(bindConfig(…)) is that reconciliation, but written once and tested once instead of re-implemented per CLI.

## Constraints that don't leak

There's a subtler problem that I didn't fully appreciate until late in the 0.x cycle. Consider this:

```typescript
option("--port", integer({ min: 1024, max: 65535 }))
```

At the CLI, the parser rejects --port 80. Good. Now wrap it in bindEnv():

```typescript
bindEnv(
  option("--port", integer({ min: 1024, max: 65535 })),
  { context: envCtx, key: "PORT", parser: integer() }, // no bounds here
)
```

In 0.x, if the user left --port off and set PORT=80 in the environment, the value 80 would flow through untouched. The env-level parser here is integer() without bounds, so it accepted. The CLI-level parser's constraints never ran on values that didn't arrive via argv. Config files had the same hole: a constraint written into the CLI option could be silently bypassed by a different source. This isn't the sort of bug that shows up in the tests you'd normally write. It shows up when somebody sets an environment variable in production and a value that should've been rejected sails through to the rest of the application.

1.0 adds a Parser.validateValue() method that fallback paths use. Environment values, config values, and defaults are now re-validated against the CLI parser's constraints on their way in. The rule is consistent: if a value wouldn't be accepted from argv, it's not accepted from anywhere else either.

I'd always described Optique as a "parse, don't validate" library. The phrase is shorthand for an approach where you don't run a separate validation pass after parsing; the parser itself rejects invalid input up front. 0.x mostly delivered on that for argv. 1.0 extends it to every source a value can enter from.
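In behavioral terms, the 1.0 rule means the bounded option above now treats every source alike (a sketch of the observable behavior described here, not of the internals; exact error wording may differ):

```typescript
// $ myapp --port 80   -> rejected in 0.x and 1.0 (below the min of 1024)
// $ PORT=80 myapp     -> accepted in 0.x; rejected in 1.0, because the env
//                        value is re-checked against the CLI option's
//                        integer({ min: 1024, max: 65535 }) constraints
const port = bindEnv(
  option("--port", integer({ min: 1024, max: 65535 })),
  { context: envCtx, key: "PORT", parser: integer() },
);
```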
## When Optique isn't the right choice

If your CLI has four flags and no subcommands, use Commander.js. You'll be done faster, your bundle will be smaller, and whoever reviews the PR won't have to learn a new mental model. Optique pays off once you have nontrivial structure: mutually exclusive groups, co-required options, values that arrive from multiple sources, subcommands with per-subcommand option sets. Below that complexity bar, its abstractions are overhead you're paying for no return.

If you have a large Commander.js codebase that works, don't port it. The path from imperative configuration to parser combinators isn't a three-hour rewrite, and the bug you introduce during the rewrite is rarely worth the cleaner types afterward. I'd reach for Optique on a new CLI, or on a new subcommand being added to an existing app, not on a retroactive migration.

If you need a specific Commander.js or Yargs plugin that does something exotic, Optique's ecosystem is smaller. I expect that to change, but I shouldn't pretend it isn't smaller today.

There's a more uncomfortable question too. If your CLI is complex enough to benefit from Optique, maybe the CLI itself has too many knobs. Optique helps you build a TV remote where every button is correctly wired and no two conflict, but it doesn't ask whether the remote should have that many buttons in the first place. If you find yourself reaching for deeply nested or() trees, consider simplifying the interface before modeling it more precisely. I think about this sometimes. Some interfaces genuinely need cockpit-level density: database admin tools, deployment pipelines, build systems. Optique is at its best when the complexity is real. When it's accumulated through feature creep, no parser library will save you.

## 1.0 means I can stop adding footnotes

Through most of 0.x, recommending Optique to anyone required footnotes. The env package isn't stable yet. The prompt API might change. runWithConfig is on the way out, use X for now. This constraint doesn't carry across env boundaries, so double-check. The library worked, but the surface I was asking people to commit to was moving. That's what 1.0 changes for me. I can send someone the docs link without a page of caveats first.

Documentation is at optique.dev. The 1.0 announcement and changelog are on GitHub. Issues and discussions are the place to tell me where the sharp edges still are.
  • Using CSS #animations as state machines to remember focus and hover states with #CSS only. "I wanted to do this without #JavaScript, just with CSS, because I was going to use it in a demo about the new focusgroup web platform feature. Focusgroup handles a lot of keyboard navigation logic for you, for free, with only an #HTML attribute." https://patrickbrosset.com/articles/2026-03-09-using-css-animations-as-state-machines-to-remember-focus-and-hover-states-with-css-only/ #frontend #webdev #a11y #accessibility
  • More Reasons to Delete Your #Javascript Series: Invoker Commands & Popovers APIs. In the past decade, #CSS and #HTML have made fantastic strides into JS's territory with multiple new APIs and features, like scroll-driven animations and dialog elements. Last week, I got to experiment with extending my dialog elements to open without any #JS whatsoever... https://amberweinberg.com/more-reasons-to-delete-your-javascript-series-invoker-commands-popovers-apis/ #frontEnd #webdevelopment #webdev #a11y #accessibility
  • @OrionBelt @yoota unfortunately I'm afraid not any time soon!
  • @pmj @kubikpixel Typosquatting or packages with name similarities infected Python and Ruby some years ago. Ugly, but fixable. NPM is controlled by Microslop via Gitslop.
  • Nobody Gets Fired for Picking JSON, but Maybe They Should? By Miguel Young de la Sota. JSON is extremely popular but deeply flawed. This article discusses the details of JSON's design, how it's used (and misused), and how… https://monodes.com/predaelli/2026/03/29/nobody-gets-fired-for-picking-json-but-maybe-they-should/ #Javascript #formats #JSON #parsing
  • About to start a solo web dev project.

    Mondo webdev kanban php javascript
    @sephster Pen and paper? 😇 https://marcoxbresciani.codeberg.page/kaizen/kanban-journal.html
  • #Wikipedia hit by self-propagating #JavaScript worm that vandalized pages. https://www.bleepingcomputer.com/news/security/wikipedia-hit-by-self-propagating-javascript-worm-that-vandalized-pages/ #malware #cybersecurity
  • @cheeaun 😎
  • A few screenshots of my current progress playing with particles in #WebGPU. #screenshotSaturday #webdev #wgsl #particles #javascript #computergraphics #graphicsprogramming #cgi
  • The support for #MathML in browsers is often poor.

    Mondo mathml mathjax javascript
    The support for #MathML in browsers is often poor. The fact that, instead of improving it, the developers seem to be pressuring the standards bodies to impoverish the language only makes this worse. (I am aware of #MathJax. We shouldn't need a heavy-duty #JavaScript library to solve typesetting problems. That's exactly what browser engines are there for.) I weep for the status of scientific publishing in the modern world. (This rant is prompted by my experience looking for options to make the SPHERIC newsletter available both in PDF and in HTML form. I don't even need complex math usually; we try to avoid it in contributions because it's hard to typeset math outside of the TeX world. But the fact that in 2026 the only safe way online seems to be to typeset everything manually or rely on huge JavaScript libraries is a little depressing.)
  • @tanuki no but I'll take a look, thanks for the recommendation 🙇
  • A working browser built with AI, with 3 million lines of code: breakthrough or illusion? 📌 Link to the article: https://www.redhotcyber.com/post/un-browser-funzionante-creato-con-lai-con-3-milioni-di-righe-di-codice-svolta-o-illusione/ #redhotcyber #news #svilupposoftware #browser #gpt5 #intelligenzaartificiale #rust #javascript #webdevelopment #programmazione
  • Learning zero-width joiners like

    Mondo javascript unicode
    Learning zero-width joiners like… #javascript #unicode
  • @celesteh@hachyderm.io @jnkrtech@treehouse.systems I think it's partly because JavaScript is more commonly used for building consumer products compared to other languages, so there's a higher proportion of developers focused on shipping products rather than diving deep into infrastructure. When you're building a product, you naturally extract just enough to solve your immediate problem and move on. Languages like Rust tend to attract more developers interested in tooling and infrastructure work, where yak shaving is almost expected. The ecosystem reflects the priorities of its community.
  • @ansuz The old-school way was to notify people at every opportunity that W3Schools was full of mistakes and direct them to MDN.
  • Consider Git's -C option:

```
git -C /path/to/repo checkout <TAB>
```

When you hit Tab, Git completes branch names from /path/to/repo, not your current directory. The completion is context-aware—it depends on the value of another option.

Most CLI parsers can't do this. They treat each option in isolation, so completion for --branch has no way of knowing the --repo value. You end up with two unpleasant choices: either show completions for all possible branches across all repositories (useless), or give up on completion entirely for these options.

Optique 0.10.0 introduces a dependency system that solves this problem while preserving full type safety.

## Static dependencies with or()

Optique already handles certain kinds of dependent options via the or() combinator:

```typescript
import { flag, object, option, or, string } from "@optique/core";

const outputOptions = or(
  object({
    json: flag("--json"),
    pretty: flag("--pretty"),
  }),
  object({
    csv: flag("--csv"),
    delimiter: option("--delimiter", string()),
  }),
);
```

TypeScript knows that if json is true, you'll have a pretty field, and if csv is true, you'll have a delimiter field. The parser enforces this at runtime, and shell completion will suggest --pretty only when --json is present.

This works well when the valid combinations are known at definition time. But it can't handle cases where valid values depend on runtime input—like branch names that vary by repository.

## Runtime dependencies

Common scenarios include:

- A deployment CLI where --environment affects which services are available
- A database tool where --connection affects which tables can be completed
- A cloud CLI where --project affects which resources are shown

In each case, you can't know the valid values until you know what the user typed for the dependency option. Optique 0.10.0 introduces dependency() and derive() to handle exactly this.

## The dependency system

The core idea is simple: mark one option as a dependency source, then create derived parsers that use its value.

```typescript
import {
  choice,
  dependency,
  message,
  object,
  option,
  string,
} from "@optique/core";

function getRefsFromRepo(repoPath: string): string[] {
  // In real code, this would read from the Git repository
  return ["main", "develop", "feature/login"];
}

// Mark as a dependency source
const repoParser = dependency(string());

// Create a derived parser
const refParser = repoParser.derive({
  metavar: "REF",
  factory: (repoPath) => {
    const refs = getRefsFromRepo(repoPath);
    return choice(refs);
  },
  defaultValue: () => ".",
});

const parser = object({
  repo: option("--repo", repoParser, {
    description: message`Path to the repository`,
  }),
  ref: option("--ref", refParser, {
    description: message`Git reference`,
  }),
});
```

The factory function is where the dependency gets resolved. It receives the actual value the user provided for --repo and returns a parser that validates against refs from that specific repository.

Under the hood, Optique uses a three-phase parsing strategy:

1. Parse all options in a first pass, collecting dependency values
2. Call factory functions with the collected values to create concrete parsers
3. Re-parse derived options using those dynamically created parsers

This means both validation and completion work correctly—if the user has already typed --repo /some/path, the --ref completion will show refs from that path.
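For the stub above, a real getRefsFromRepo might just shell out to Git (a minimal sketch; it assumes git is on PATH and skips error handling):

```typescript
import { execFileSync } from "node:child_process";

function getRefsFromRepo(repoPath: string): string[] {
  // List local branch names in the given repository.
  const out = execFileSync(
    "git",
    ["-C", repoPath, "for-each-ref", "--format=%(refname:short)", "refs/heads"],
    { encoding: "utf8" },
  );
  return out.split("\n").filter((line) => line.length > 0);
}
```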
## Repository-aware completion with @optique/git

The @optique/git package provides async value parsers that read from Git repositories. Combined with the dependency system, you can build CLIs with repository-aware completion:

```typescript
import {
  command,
  dependency,
  message,
  object,
  option,
  string,
} from "@optique/core";
import { gitBranch } from "@optique/git";

const repoParser = dependency(string());

const branchParser = repoParser.deriveAsync({
  metavar: "BRANCH",
  factory: (repoPath) => gitBranch({ dir: repoPath }),
  defaultValue: () => ".",
});

const checkout = command(
  "checkout",
  object({
    repo: option("--repo", repoParser, {
      description: message`Path to the repository`,
    }),
    branch: option("--branch", branchParser, {
      description: message`Branch to checkout`,
    }),
  }),
);
```

Now when you type my-cli checkout --repo /path/to/project --branch <TAB>, the completion will show branches from /path/to/project. The defaultValue of "." means that if --repo isn't specified, it falls back to the current directory.

## Multiple dependencies

Sometimes a parser needs values from multiple options. The deriveFrom() function handles this:

```typescript
import {
  choice,
  dependency,
  deriveFrom,
  message,
  object,
  option,
} from "@optique/core";

function getAvailableServices(env: string, region: string): string[] {
  return [`${env}-api-${region}`, `${env}-web-${region}`];
}

const envParser = dependency(choice(["dev", "staging", "prod"] as const));
const regionParser = dependency(choice(["us-east", "eu-west"] as const));

const serviceParser = deriveFrom({
  dependencies: [envParser, regionParser] as const,
  metavar: "SERVICE",
  factory: (env, region) => {
    const services = getAvailableServices(env, region);
    return choice(services);
  },
  defaultValues: () => ["dev", "us-east"] as const,
});

const parser = object({
  env: option("--env", envParser, {
    description: message`Deployment environment`,
  }),
  region: option("--region", regionParser, {
    description: message`Cloud region`,
  }),
  service: option("--service", serviceParser, {
    description: message`Service to deploy`,
  }),
});
```

The factory receives values in the same order as the dependency array. If some dependencies aren't provided, Optique uses the defaultValues.

## Async support

Real-world dependency resolution often involves I/O—reading from Git repositories, querying APIs, accessing databases. Optique provides async variants for these cases:

```typescript
import { dependency, string } from "@optique/core";
import { gitBranch } from "@optique/git";

const repoParser = dependency(string());

const branchParser = repoParser.deriveAsync({
  metavar: "BRANCH",
  factory: (repoPath) => gitBranch({ dir: repoPath }),
  defaultValue: () => ".",
});
```

The @optique/git package uses isomorphic-git under the hood, so gitBranch(), gitTag(), and gitRef() all work in both Node.js and Deno. There's also deriveSync() for when you need to be explicit about synchronous behavior, and deriveFromAsync() for multiple async dependencies.

## Wrapping up

The dependency system lets you build CLIs where options are aware of each other—not just for validation, but for shell completion too. You get type safety throughout: TypeScript knows the relationship between your dependency sources and derived parsers, and invalid combinations are caught at compile time.

This is particularly useful for tools that interact with external systems where the set of valid values isn't known until runtime. Git repositories, cloud providers, databases, container registries—anywhere the completion choices depend on context the user has already provided.

This feature will be available in Optique 0.10.0.
To try the pre-release:

```
deno add jsr:@optique/core@0.10.0-dev.311
```

Or with npm:

```
npm install @optique/core@0.10.0-dev.311
```

See the documentation for more details.
  • Heyyy, I'm Novem.

    Mondo introduction digitalart javascript gamedev music
    @NovemDecimal the alt-text is ok. No worries.
  • We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't. In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion.

## The naïve approach: parsing process.argv

Let's start with the most basic approach. Say we want a greeting program that takes a name and optionally repeats the greeting:

```typescript
// greet.ts
const args = process.argv.slice(2);

let name: string | undefined;
let count = 1;

for (let i = 0; i < args.length; i++) {
  if (args[i] === "--name" || args[i] === "-n") {
    name = args[++i];
  } else if (args[i] === "--count" || args[i] === "-c") {
    count = parseInt(args[++i], 10);
  }
}

if (!name) {
  console.error("Error: --name is required");
  process.exit(1);
}

for (let i = 0; i < count; i++) {
  console.log(`Hello, ${name}!`);
}
```

Run node greet.js --name Alice --count 3 and you'll get three greetings. But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option.

## The traditional libraries

You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems:

```typescript
// With Commander.js
import { program } from "commander";

program
  .requiredOption("-n, --name <n>", "Name to greet")
  .option("-c, --count <number>", "Number of times to greet", "1")
  .parse();

const opts = program.opts();
```

These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in.

The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints.

## Enter Optique

Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have.

Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context.

Let's rebuild our greeting program:

```typescript
import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { integer, string } from "@optique/core/valueparser";
import { withDefault } from "@optique/core/modifiers";
import { run } from "@optique/run";

const parser = object({
  name: option("-n", "--name", string()),
  count: withDefault(option("-c", "--count", integer({ min: 1 })), 1),
});

const config = run(parser);
// config is typed as { name: string; count: number }

for (let i = 0; i < config.count; i++) {
  console.log(`Hello, ${config.name}!`);
}
```

Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes.
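Because the parser is a plain value, you can sanity-check the inferred behavior without touching process.argv, using the same parse() helper that shows up in the Tips section below (a sketch):

```typescript
import { parse } from "@optique/core/parser";

// Exercise the greeting parser against a synthetic argv.
const ok = parse(parser, ["--name", "Alice", "--count", "3"]);
if (ok.success) {
  console.log(ok.value.name, ok.value.count); // "Alice" 3
}

const bad = parse(parser, ["--name", "Alice", "--count", "0"]);
console.log(bad.success); // false: integer({ min: 1 }) rejects 0
```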
Install it with your package manager of choice:

```
npm add @optique/core @optique/run
# or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run
```

## Building up: a file converter

Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file.

```typescript
import { object } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = object({
  input: argument(string({ metavar: "INPUT" })),
  output: option("-o", "--output", string({ metavar: "FILE" })),
  format: withDefault(
    option("-f", "--format", choice(["json", "yaml", "toml"])),
    "json"
  ),
  pretty: option("-p", "--pretty"),
  verbose: option("-v", "--verbose"),
});

const config = run(parser, {
  help: "both",
  version: { mode: "both", value: "1.0.0" },
});

// config.input: string
// config.output: string
// config.format: "json" | "yaml" | "toml"
// config.pretty: boolean
// config.verbose: boolean
```

The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time.

The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values).

## Mutually exclusive options

Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both:

```typescript
import { object, or } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, constant, option } from "@optique/core/primitives";
import { integer, string, url } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = or(
  // Server mode
  object({
    mode: constant("server"),
    port: option("-p", "--port", integer({ min: 1, max: 65535 })),
    host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
  }),
  // Client mode
  object({
    mode: constant("client"),
    url: argument(url()),
  }),
);

const config = run(parser);
```

The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator. TypeScript infers a discriminated union:

```typescript
type Config =
  | { mode: "server"; port: number; host: string }
  | { mode: "client"; url: URL };
```

Now you can write type-safe code that handles each mode:

```typescript
if (config.mode === "server") {
  console.log(`Starting server on ${config.host}:${config.port}`);
} else {
  console.log(`Connecting to ${config.url.hostname}`);
}
```

Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist.

This is the key difference from configuration-based libraries. With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI.
## Subcommands

For larger tools, you'll want subcommands. Optique handles this with the command() parser:

```typescript
import { object, or } from "@optique/core/constructs";
import { optional } from "@optique/core/modifiers";
import { argument, command, constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = or(
  command("add", object({
    action: constant("add"),
    key: argument(string({ metavar: "KEY" })),
    value: argument(string({ metavar: "VALUE" })),
  })),
  command("remove", object({
    action: constant("remove"),
    key: argument(string({ metavar: "KEY" })),
  })),
  command("list", object({
    action: constant("list"),
    pattern: optional(option("-p", "--pattern", string())),
  })),
);

const result = run(parser, { help: "both" });

switch (result.action) {
  case "add":
    console.log(`Adding ${result.key}=${result.value}`);
    break;
  case "remove":
    console.log(`Removing ${result.key}`);
    break;
  case "list":
    console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`);
    break;
}
```

Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands.

The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them.

## Shell completion

Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run():

```typescript
const config = run(parser, {
  help: "both",
  version: { mode: "both", value: "1.0.0" },
  completion: "both",
});
```

Users can then generate completion scripts:

```
$ myapp --completion bash >> ~/.bashrc
$ myapp --completion zsh >> ~/.zshrc
$ myapp --completion fish > ~/.config/fish/completions/myapp.fish
```

The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add.

Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free.

## Integrating with validation libraries

Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers:

```typescript
import { z } from "zod";
import { zod } from "@optique/zod";
import { option } from "@optique/core/primitives";

const email = option("--email", zod(z.string().email()));
const port = option("--port", zod(z.coerce.number().int().min(1).max(65535)));
```

Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to.
Prefer Valibot? The @optique/valibot package works the same way:

```typescript
import * as v from "valibot";
import { valibot } from "@optique/valibot";
import { option } from "@optique/core/primitives";

const email = option("--email", valibot(v.pipe(v.string(), v.email())));
```

Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable.

## Tips

A few things I've learned building CLIs with Optique:

- Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers.
- Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters.
- Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present.
- Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic:

```typescript
import assert from "node:assert";
import { parse } from "@optique/core/parser";

const result = parse(parser, ["--name", "Alice", "--count", "3"]);
if (result.success) {
  assert.equal(result.value.name, "Alice");
  assert.equal(result.value.count, 3);
}
```

This is especially valuable for complex parsers with many edge cases.

## Going further

We've covered the fundamentals, but Optique has more to offer:

- Async value parsers for validating against external sources, like checking if a Git branch exists or if a URL is reachable
- Path validation with path() for checking file existence, directory structure, and file extensions
- Custom value parsers for domain-specific types (though Zod/Valibot integration is usually easier)
- Reusable option groups with merge() for sharing common options across subcommands
- The @optique/temporal package for parsing dates and times using the Temporal API

Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios.

## That's it

Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production.

The source is on GitHub, and packages are available on both npm and JSR. Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.
  • It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like "here" or "this works." The other half dump entire objects that scroll off the screen. Good luck.

We've all been there. And yet, setting up "proper" logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.

I'm a fan of the "just enough" approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.

## Starting with console methods—and where they fall short

The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.

```typescript
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
```

For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:

- No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say "show me only warnings and above" at runtime.
- Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
- No context about where logs come from. When your app has dozens of modules, a log message like "Connection failed" doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
- No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
- Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.

## What you actually need from a logging system

Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.

### Log levels with filtering

A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
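In LogTape terms (the configure() call is introduced below), that runtime switch can be a single value read from the environment—a sketch, with LOG_LEVEL as an assumed variable name:

```typescript
import { configure, getConsoleSink, type LogLevel } from "@logtape/logtape";

// Pick the threshold at startup instead of editing every call site.
const lowestLevel = (process.env.LOG_LEVEL ?? "info") as LogLevel;

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [{ category: ["my-app"], lowestLevel, sinks: ["console"] }],
});
```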
### Categories

When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.

### Sinks (multiple output destinations)

"Sink" is just a fancy word for "where logs go." You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.

### Structured logging

Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:

```typescript
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");

// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
```

Now you can search for all logs where userId === 123 or filter by IP address.

### Context for request tracing

In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.

## Getting started with LogTape

There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on. So why LogTape? A few reasons stood out to me:

- Zero dependencies. Not "few dependencies"—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
- Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no "this feature only works on Node."
- Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.

Let's set it up:

```
npm add @logtape/logtape        # npm
pnpm add @logtape/logtape       # pnpm
yarn add @logtape/logtape       # Yarn
deno add jsr:@logtape/logtape   # Deno
bun add @logtape/logtape        # Bun
```

Configuration happens once, at your application's entry point:

```typescript
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";

await configure({
  sinks: {
    console: getConsoleSink(), // Where logs go
  },
  loggers: [
    // What to log
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
  ],
});

// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
```

Notice a few things:

- Configuration is explicit. You decide where logs go (sinks) and which logs to show (lowestLevel).
- Categories are hierarchical. The logger ["my-app", "server"] inherits settings from ["my-app"].
- Template literals work. You can use backticks for a natural logging syntax.
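Hierarchy means you rarely configure leaf categories directly. Both loggers in this sketch are governed by the single ["my-app"] entry above:

```typescript
import { getLogger } from "@logtape/logtape";

// Both inherit ["my-app"]'s configuration: console sink, debug level.
const serverLogger = getLogger(["my-app", "server"]);
const dbLogger = getLogger(["my-app", "database"]);

serverLogger.info`Listening on port 3000`;
dbLogger.debug`Opened connection pool (size: ${10})`;
```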
## Categories and filtering: Controlling log verbosity

Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.

Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.

```typescript
await configure({
  sinks: {
    console: getConsoleSink(),
  },
  loggers: [
    // Default: info and above
    { category: ["my-app"], lowestLevel: "info", sinks: ["console"] },
    // DB module: show debug too
    { category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] },
  ],
});
```

Now when you log from different parts of your app:

```typescript
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up

// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
```

### Controlling third-party library logs

If you're using libraries that also use LogTape, you can control their logs separately:

```typescript
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
    // Only show warnings and above from some-library
    { category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
  ],
});
```

### The root logger

Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:

```typescript
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Catch all logs at info level
    { category: [], lowestLevel: "info", sinks: ["console"] },
    // But show debug for your app
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
  ],
});
```

## Log levels and when to use them

LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.

| Level   | When to use it |
|---------|----------------|
| trace   | Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
| debug   | Information useful during development. Variable values, state changes, flow control decisions. |
| info    | Normal operational messages. "Server started," "User logged in," "Job completed." |
| warning | Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
| error   | Something failed. An operation couldn't complete, but the app is still running. |
| fatal   | The app is about to crash or is in an unrecoverable state. |

```typescript
const logger = getLogger(["my-app"]);

logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
```

A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.

## Structured logging: Beyond plain text

At some point, you'll want to search your logs. "Show me all errors from the payment service in the last hour." "Find all requests from user 12345." "What's the average response time for the /api/users endpoint?"

If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead. Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:

Template literals (great for simple messages):

```typescript
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
```

Message templates with properties (great for structured data):

```typescript
logger.info("User performed action", {
  userId: 123,
  action: "login",
  ip: "192.168.1.1",
  timestamp: new Date().toISOString(),
});
```

You can reference properties in your message using placeholders:

```typescript
logger.info("User {userId} logged in from {ip}", {
  userId: 123,
  ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
```

### Nested property access

LogTape supports dot notation and array indexing in placeholders:

```typescript
logger.info("Order {order.id} placed by {order.customer.name}", {
  order: {
    id: "ORD-001",
    customer: { name: "Alice", email: "alice@example.com" },
  },
});

logger.info("First item: {items[0].name}", {
  items: [{ name: "Widget", price: 9.99 }],
});
```

### Machine-readable output with JSON Lines

For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:

```typescript
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";

await configure({
  sinks: {
    console: getConsoleSink({ formatter: jsonLinesFormatter }),
  },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["console"] },
  ],
});
```

Output:

```
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
```
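Once each record is one JSON object per line, ad-hoc queries become trivial to script. A sketch, assuming the logs were written to app.log and using the field names from the sample output above:

```typescript
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Print every error-level record that mentions a given user.
const rl = createInterface({ input: createReadStream("app.log") });
rl.on("line", (line) => {
  const record = JSON.parse(line);
  if (record.level === "ERROR" && record.properties?.userId === 123) {
    console.log(record["@timestamp"], record.message);
  }
});
```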
### External services

For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:

```typescript
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";

// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";

// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
```

The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.

### Multiple sinks

Here's where things get interesting. You can send different logs to different destinations based on their level or category:

```typescript
await configure({
  sinks: {
    console: getConsoleSink(),
    file: getFileSink("app.log"),
    errors: getSentrySink(),
  },
  loggers: [
    // Everything to console + file
    { category: [], lowestLevel: "info", sinks: ["console", "file"] },
    // Errors also go to Sentry
    { category: [], lowestLevel: "error", sinks: ["errors"] },
  ],
});
```

Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.

### Custom sinks

Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.

A sink is just a function that takes a LogRecord. That's it:

```typescript
import type { Sink } from "@logtape/logtape";

const slackSink: Sink = (record) => {
  // Only send errors and fatals to Slack
  if (record.level === "error" || record.level === "fatal") {
    fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
      }),
    });
  }
};
```

The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.

## Request tracing with contexts

Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.

This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.

LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.

### Explicit context

The simplest approach is to create a logger with attached properties using `.with()`:

```typescript
function handleRequest(req: Request) {
  const requestId = crypto.randomUUID();
  const logger = getLogger(["my-app", "http"]).with({ requestId });

  logger.info`Request received`; // Includes requestId automatically
  processRequest(req, logger);
  logger.info`Request completed`; // Also includes requestId
}
```

This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
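To see the gap concretely, here's a hypothetical helper buried somewhere below the handler. It gets its own logger, so the requestId attached via `.with()` up in `handleRequest` never reaches it:

```typescript
import { getLogger } from "@logtape/logtape";

// Hypothetical module deep in the call stack. It knows nothing about the
// handler's logger instance, so its log records carry no requestId.
export async function chargeCard(amountCents: number) {
  const logger = getLogger(["my-app", "billing"]);
  logger.info`Charging ${amountCents} cents`; // no requestId attached
}
```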
### Implicit context

This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).

First, enable implicit contexts in your configuration:

```typescript
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
  ],
  contextLocalStorage: new AsyncLocalStorage(),
});
```

Then use withContext() in your request handler:

```typescript
import { withContext, getLogger } from "@logtape/logtape";

function handleRequest(req: Request) {
  const requestId = crypto.randomUUID();

  return withContext({ requestId }, async () => {
    // Every log message in this callback includes requestId—automatically
    const logger = getLogger(["my-app"]);
    logger.info`Processing request`;

    await validateInput(req); // Logs here include requestId
    await processBusinessLogic(req); // Logs here too
    await saveToDatabase(req); // And here

    logger.info`Request complete`;
  });
}
```

The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.

This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.

### Framework integrations

Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:

```typescript
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());

// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });

// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());

// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
```

These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.

## Using LogTape in libraries vs applications

If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?

LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.

### If you're writing a library

The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.

```typescript
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";

const logger = getLogger(["my-library", "database"]);

export function connect(url: string) {
  logger.debug`Connecting to ${url}`;
  // ... connection logic ...
  logger.info`Connected successfully`;
}
```
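The category hierarchy from earlier pays off here: give each module of your library its own subcategory, so applications can tune them independently. A sketch, with hypothetical module names:

```typescript
import { getLogger } from "@logtape/logtape";

// One subcategory per module. A user chasing a connection bug can turn
// ["my-library", "database"] up to debug while leaving the rest quiet.
const dbLogger = getLogger(["my-library", "database"]);
const httpLogger = getLogger(["my-library", "http"]);

export async function fetchAndStore(url: string) {
  httpLogger.debug`Fetching ${url}`;
  const response = await fetch(url);
  dbLogger.debug`Storing response with status ${response.status}`;
}
```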
What happens when someone uses your library? If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.

If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.

This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.

### If you're writing an application

You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:

```typescript
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Your app: verbose
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
    // Library: quiet
    { category: ["my-library"], lowestLevel: "warning", sinks: ["console"] },
    // That one library: silent
    { category: ["noisy-library"], lowestLevel: "fatal", sinks: [] },
  ],
});
```

This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.

### Migrating from another logger?

If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:

```typescript
import { install } from "@logtape/adaptor-winston";
import winston from "winston";

install(winston.createLogger({ /* your existing config */ }));
```

This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.

## Production considerations

Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.

### Non-blocking mode

By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.

Non-blocking mode buffers log messages and writes them in the background:

```typescript
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
```

The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
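One way to shrink the lost-logs window is to flush buffers explicitly on shutdown. A minimal sketch for a long-running Node.js process, assuming SIGTERM-based shutdown; dispose() (covered below for edge functions) flushes buffered logs:

```typescript
import { configure, dispose, getConsoleSink } from "@logtape/logtape";
import { getFileSink } from "@logtape/file";

await configure({
  sinks: {
    console: getConsoleSink({ nonBlocking: true }),
    file: getFileSink("app.log", { nonBlocking: true }),
  },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["console", "file"] },
  ],
});

// Flush buffered records before the process exits.
process.on("SIGTERM", async () => {
  await dispose();
  process.exit(0);
});
```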
### Sensitive data redaction

Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.

LogTape's @logtape/redaction package helps you catch these before they become a problem:

```typescript
import {
  redactByPattern,
  EMAIL_ADDRESS_PATTERN,
  CREDIT_CARD_NUMBER_PATTERN,
  type RedactionPattern,
} from "@logtape/redaction";
import {
  defaultConsoleFormatter,
  configure,
  getConsoleSink,
} from "@logtape/logtape";

const BEARER_TOKEN_PATTERN: RedactionPattern = {
  pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
  replacement: "[REDACTED]",
};

const formatter = redactByPattern(defaultConsoleFormatter, [
  EMAIL_ADDRESS_PATTERN,
  CREDIT_CARD_NUMBER_PATTERN,
  BEARER_TOKEN_PATTERN,
]);

await configure({
  sinks: {
    console: getConsoleSink({ formatter }),
  },
  // ...
});
```

With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else.

It's not foolproof—you should still be mindful of what you log—but it provides a safety net. See the redaction documentation for more patterns and field-based redaction.

### Edge functions and serverless

Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.

The solution is to explicitly flush logs before returning:

```typescript
import { configure, dispose } from "@logtape/logtape";

export default {
  async fetch(request, env, ctx) {
    await configure({ /* ... */ });

    // ... handle request ...

    ctx.waitUntil(dispose()); // Flush logs before worker terminates
    return new Response("OK");
  },
};
```

The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.

## Wrapping up

Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.

LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.

If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.

Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.