
Piero Bosio Personal Social Website (Fediverse)

A social forum federated with the rest of the world. It's not the instances that matter, it's the people.
  • 0 Votes
    1 Posts
    3 Views
    A working browser built with AI, with 3 million lines of code: breakthrough or illusion? 📌 Link to the article: https://www.redhotcyber.com/post/un-browser-funzionante-creato-con-lai-con-3-milioni-di-righe-di-codice-svolta-o-illusione/ #redhotcyber #news #svilupposoftware #browser #gpt5 #intelligenzaartificiale #rust #javascript #webdevelopment #programmazione
  • 0 Votes
    1 Posts
    1 Views
    Learning zero-width joiners like #javascript #unicode
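    As a quick illustration of what zero-width joiners do in JavaScript (an aside added here, not part of the original post): the family emoji below is three emoji glued together with U+200D, and different length measures report it differently.

    // U+200D (zero-width joiner) combines emoji into a single rendered glyph.
    const family = "\u{1F468}\u200D\u{1F469}\u200D\u{1F467}"; // 👨‍👩‍👧
    console.log(family.length);              // 8 UTF-16 code units
    console.log([...family].length);         // 5 code points (3 emoji + 2 ZWJs)
    const segmenter = new Intl.Segmenter("en", { granularity: "grapheme" });
    console.log([...segmenter.segment(family)].length); // 1 grapheme cluster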
  • 0 Votes
    4 Posts
    9 Views
    @celesteh@hachyderm.io @jnkrtech@treehouse.systems I think it's partly because JavaScript is more commonly used for building consumer products compared to other languages, so there's a higher proportion of developers focused on shipping products rather than diving deep into infrastructure. When you're building a product, you naturally extract just enough to solve your immediate problem and move on. Languages like Rust tend to attract more developers interested in tooling and infrastructure work, where yak shaving is almost expected. The ecosystem reflects the priorities of its community.
  • 0 Votes
    2 Posts
    1 Views
    @ansuz The old-school way was to notify people at every opportunity that W3Schools was full of mistakes and direct them to MDN.
  • 0 Votes
    1 Posts
    9 Views
    Consider Git's -C option: git -C /path/to/repo checkout <TAB> When you hit <kbd>Tab</kbd>, Git completes branch names from /path/to/repo, not your current directory. The completion is context-aware—it depends on the value of another option. Most CLI parsers can't do this. They treat each option in isolation, so completion for --branch has no way of knowing the --repo value. You end up with two unpleasant choices: either show completions for all possible branches across all repositories (useless), or give up on completion entirely for these options. Optique 0.10.0 introduces a dependency system that solves this problem while preserving full type safety. Static dependencies with or() Optique already handles certain kinds of dependent options via the or() combinator: import { flag, object, option, or, string } from "@optique/core"; const outputOptions = or( object({ json: flag("--json"), pretty: flag("--pretty"), }), object({ csv: flag("--csv"), delimiter: option("--delimiter", string()), }), ); TypeScript knows that if json is true, you'll have a pretty field, and if csv is true, you'll have a delimiter field. The parser enforces this at runtime, and shell completion will suggest --pretty only when --json is present. This works well when the valid combinations are known at definition time. But it can't handle cases where valid values depend on runtime input—like branch names that vary by repository. Runtime dependencies Common scenarios include: A deployment CLI where --environment affects which services are available A database tool where --connection affects which tables can be completed A cloud CLI where --project affects which resources are shown In each case, you can't know the valid values until you know what the user typed for the dependency option. Optique 0.10.0 introduces dependency() and derive() to handle exactly this. The dependency system The core idea is simple: mark one option as a dependency source, then create derived parsers that use its value. import { choice, dependency, message, object, option, string, } from "@optique/core"; function getRefsFromRepo(repoPath: string): string[] { // In real code, this would read from the Git repository return ["main", "develop", "feature/login"]; } // Mark as a dependency source const repoParser = dependency(string()); // Create a derived parser const refParser = repoParser.derive({ metavar: "REF", factory: (repoPath) => { const refs = getRefsFromRepo(repoPath); return choice(refs); }, defaultValue: () => ".", }); const parser = object({ repo: option("--repo", repoParser, { description: message`Path to the repository`, }), ref: option("--ref", refParser, { description: message`Git reference`, }), }); The factory function is where the dependency gets resolved. It receives the actual value the user provided for --repo and returns a parser that validates against refs from that specific repository. Under the hood, Optique uses a three-phase parsing strategy: Parse all options in a first pass, collecting dependency values Call factory functions with the collected values to create concrete parsers Re-parse derived options using those dynamically created parsers This means both validation and completion work correctly—if the user has already typed --repo /some/path, the --ref completion will show refs from that path. Repository-aware completion with @optique/git The @optique/git package provides async value parsers that read from Git repositories. 
Combined with the dependency system, you can build CLIs with repository-aware completion: import { command, dependency, message, object, option, string, } from "@optique/core"; import { gitBranch } from "@optique/git"; const repoParser = dependency(string()); const branchParser = repoParser.deriveAsync({ metavar: "BRANCH", factory: (repoPath) => gitBranch({ dir: repoPath }), defaultValue: () => ".", }); const checkout = command( "checkout", object({ repo: option("--repo", repoParser, { description: message`Path to the repository`, }), branch: option("--branch", branchParser, { description: message`Branch to checkout`, }), }), ); Now when you type my-cli checkout --repo /path/to/project --branch <TAB>, the completion will show branches from /path/to/project. The defaultValue of "." means that if --repo isn't specified, it falls back to the current directory. Multiple dependencies Sometimes a parser needs values from multiple options. The deriveFrom() function handles this: import { choice, dependency, deriveFrom, message, object, option, } from "@optique/core"; function getAvailableServices(env: string, region: string): string[] { return [`${env}-api-${region}`, `${env}-web-${region}`]; } const envParser = dependency(choice(["dev", "staging", "prod"] as const)); const regionParser = dependency(choice(["us-east", "eu-west"] as const)); const serviceParser = deriveFrom({ dependencies: [envParser, regionParser] as const, metavar: "SERVICE", factory: (env, region) => { const services = getAvailableServices(env, region); return choice(services); }, defaultValues: () => ["dev", "us-east"] as const, }); const parser = object({ env: option("--env", envParser, { description: message`Deployment environment`, }), region: option("--region", regionParser, { description: message`Cloud region`, }), service: option("--service", serviceParser, { description: message`Service to deploy`, }), }); The factory receives values in the same order as the dependency array. If some dependencies aren't provided, Optique uses the defaultValues. Async support Real-world dependency resolution often involves I/O—reading from Git repositories, querying APIs, accessing databases. Optique provides async variants for these cases: import { dependency, string } from "@optique/core"; import { gitBranch } from "@optique/git"; const repoParser = dependency(string()); const branchParser = repoParser.deriveAsync({ metavar: "BRANCH", factory: (repoPath) => gitBranch({ dir: repoPath }), defaultValue: () => ".", }); The @optique/git package uses isomorphic-git under the hood, so gitBranch(), gitTag(), and gitRef() all work in both Node.js and Deno. There's also deriveSync() for when you need to be explicit about synchronous behavior, and deriveFromAsync() for multiple async dependencies. Wrapping up The dependency system lets you build CLIs where options are aware of each other—not just for validation, but for shell completion too. You get type safety throughout: TypeScript knows the relationship between your dependency sources and derived parsers, and invalid combinations are caught at compile time. This is particularly useful for tools that interact with external systems where the set of valid values isn't known until runtime. Git repositories, cloud providers, databases, container registries—anywhere the completion choices depend on context the user has already provided. This feature will be available in Optique 0.10.0. 
To try the pre-release: deno add jsr:@optique/core@0.10.0-dev.311 Or with npm: npm install @optique/core@0.10.0-dev.311 See the documentation for more details.
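    As an aside on the getRefsFromRepo() stub used in the examples above: outside of @optique/git, a synchronous factory could gather branch names by shelling out to git itself. This is only a sketch under the assumption that a git binary is on the PATH; it is not part of Optique's API.

    import { execFileSync } from "node:child_process";

    // Hypothetical implementation of the getRefsFromRepo() stub from the example:
    // list local branch names from the repository at repoPath.
    function getRefsFromRepo(repoPath: string): string[] {
      const output = execFileSync(
        "git",
        ["-C", repoPath, "for-each-ref", "--format=%(refname:short)", "refs/heads"],
        { encoding: "utf8" },
      );
      return output.split("\n").filter((ref) => ref.length > 0);
    }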
  • 0 Votes
    4 Posts
    6 Views
    @NovemDecimal the alt-text is ok. No worries.
  • 0 Votes
    1 Posts
    5 Views
    We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't. In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion. The naïve approach: parsing process.argv Let's start with the most basic approach. Say we want a greeting program that takes a name and optionally repeats the greeting: // greet.ts const args = process.argv.slice(2); let name: string | undefined; let count = 1; for (let i = 0; i < args.length; i++) { if (args[i] === "--name" || args[i] === "-n") { name = args[++i]; } else if (args[i] === "--count" || args[i] === "-c") { count = parseInt(args[++i], 10); } } if (!name) { console.error("Error: --name is required"); process.exit(1); } for (let i = 0; i < count; i++) { console.log(`Hello, ${name}!`); } Run node greet.js --name Alice --count 3 and you'll get three greetings. But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option. The traditional libraries You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems: // With Commander.js import { program } from "commander"; program .requiredOption("-n, --name <n>", "Name to greet") .option("-c, --count <number>", "Number of times to greet", "1") .parse(); const opts = program.opts(); These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in. The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints. Enter Optique Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have. Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context. Let's rebuild our greeting program: import { object } from "@optique/core/constructs"; import { option } from "@optique/core/primitives"; import { integer, string } from "@optique/core/valueparser"; import { withDefault } from "@optique/core/modifiers"; import { run } from "@optique/run"; const parser = object({ name: option("-n", "--name", string()), count: withDefault(option("-c", "--count", integer({ min: 1 })), 1), }); const config = run(parser); // config is typed as { name: string; count: number } for (let i = 0; i < config.count; i++) { console.log(`Hello, ${config.name}!`); } Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. 
Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes. Install it with your package manager of choice: npm add @optique/core @optique/run # or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run Building up: a file converter Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file. import { object } from "@optique/core/constructs"; import { optional, withDefault } from "@optique/core/modifiers"; import { argument, option } from "@optique/core/primitives"; import { choice, string } from "@optique/core/valueparser"; import { run } from "@optique/run"; const parser = object({ input: argument(string({ metavar: "INPUT" })), output: option("-o", "--output", string({ metavar: "FILE" })), format: withDefault( option("-f", "--format", choice(["json", "yaml", "toml"])), "json" ), pretty: option("-p", "--pretty"), verbose: option("-v", "--verbose"), }); const config = run(parser, { help: "both", version: { mode: "both", value: "1.0.0" }, }); // config.input: string // config.output: string // config.format: "json" | "yaml" | "toml" // config.pretty: boolean // config.verbose: boolean The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time. The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values). Mutually exclusive options Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both: import { object, or } from "@optique/core/constructs"; import { withDefault } from "@optique/core/modifiers"; import { argument, constant, option } from "@optique/core/primitives"; import { integer, string, url } from "@optique/core/valueparser"; import { run } from "@optique/run"; const parser = or( // Server mode object({ mode: constant("server"), port: option("-p", "--port", integer({ min: 1, max: 65535 })), host: withDefault(option("-h", "--host", string()), "0.0.0.0"), }), // Client mode object({ mode: constant("client"), url: argument(url()), }), ); const config = run(parser); The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator. TypeScript infers a discriminated union: type Config = | { mode: "server"; port: number; host: string } | { mode: "client"; url: URL }; Now you can write type-safe code that handles each mode: if (config.mode === "server") { console.log(`Starting server on ${config.host}:${config.port}`); } else { console.log(`Connecting to ${config.url.hostname}`); } Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist. This is the key difference from configuration-based libraries. 
With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI. Subcommands For larger tools, you'll want subcommands. Optique handles this with the command() parser: import { object, or } from "@optique/core/constructs"; import { optional } from "@optique/core/modifiers"; import { argument, command, constant, option } from "@optique/core/primitives"; import { string } from "@optique/core/valueparser"; import { run } from "@optique/run"; const parser = or( command("add", object({ action: constant("add"), key: argument(string({ metavar: "KEY" })), value: argument(string({ metavar: "VALUE" })), })), command("remove", object({ action: constant("remove"), key: argument(string({ metavar: "KEY" })), })), command("list", object({ action: constant("list"), pattern: optional(option("-p", "--pattern", string())), })), ); const result = run(parser, { help: "both" }); switch (result.action) { case "add": console.log(`Adding ${result.key}=${result.value}`); break; case "remove": console.log(`Removing ${result.key}`); break; case "list": console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`); break; } Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands. The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them. Shell completion Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run(): const config = run(parser, { help: "both", version: { mode: "both", value: "1.0.0" }, completion: "both", }); Users can then generate completion scripts: $ myapp --completion bash >> ~/.bashrc $ myapp --completion zsh >> ~/.zshrc $ myapp --completion fish > ~/.config/fish/completions/myapp.fish The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add. Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free. Integrating with validation libraries Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers: import { z } from "zod"; import { zod } from "@optique/zod"; import { option } from "@optique/core/primitives"; const email = option("--email", zod(z.string().email())); const port = option("--port", zod(z.coerce.number().int().min(1).max(65535))); Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to. Prefer Valibot? 
The @optique/valibot package works the same way: import * as v from "valibot"; import { valibot } from "@optique/valibot"; import { option } from "@optique/core/primitives"; const email = option("--email", valibot(v.pipe(v.string(), v.email()))); Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable. Tips A few things I've learned building CLIs with Optique: Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers. Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters. Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present. Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic: import { parse } from "@optique/core/parser"; const result = parse(parser, ["--name", "Alice", "--count", "3"]); if (result.success) { assert.equal(result.value.name, "Alice"); assert.equal(result.value.count, 3); } This is especially valuable for complex parsers with many edge cases. Going further We've covered the fundamentals, but Optique has more to offer: Async value parsers for validating against external sources, like checking if a Git branch exists or if a URL is reachable Path validation with path() for checking file existence, directory structure, and file extensions Custom value parsers for domain-specific types (though Zod/Valibot integration is usually easier) Reusable option groups with merge() for sharing common options across subcommands The @optique/temporal package for parsing dates and times using the Temporal API Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios. That's it Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production. The source is on GitHub, and packages are available on both npm and JSR. Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.
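    To make the testing tip above concrete, here is a sketch of unit tests for the server/client parser from the mutually exclusive options section, using the parse() helper shown earlier. The exact result shape may differ in details, so treat it as illustrative rather than canonical.

    import { strict as assert } from "node:assert";
    import { object, or } from "@optique/core/constructs";
    import { withDefault } from "@optique/core/modifiers";
    import { argument, constant, option } from "@optique/core/primitives";
    import { integer, string, url } from "@optique/core/valueparser";
    import { parse } from "@optique/core/parser";

    // Same parser as in the mutually exclusive options section above.
    const parser = or(
      object({
        mode: constant("server"),
        port: option("-p", "--port", integer({ min: 1, max: 65535 })),
        host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
      }),
      object({
        mode: constant("client"),
        url: argument(url()),
      }),
    );

    // Server mode: --host falls back to its default.
    const server = parse(parser, ["--port", "8080"]);
    if (server.success) {
      assert.equal(server.value.mode, "server");
      assert.equal(server.value.port, 8080);
      assert.equal(server.value.host, "0.0.0.0");
    }

    // Client mode: a bare URL argument selects the other branch.
    const client = parse(parser, ["https://example.com/"]);
    if (client.success) {
      assert.equal(client.value.mode, "client");
      assert.equal(client.value.url.hostname, "example.com");
    }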
  • 0 Votes
    1 Posts
    7 Views
    It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck. We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame. I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it. Starting with console methods—and where they fall short The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons. console.debug("Connecting to database..."); console.info("Server started on port 3000"); console.warn("Cache miss for user 123"); console.error("Failed to process payment"); For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show: No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime. Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any. No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed"). No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later. Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages. What you actually need from a logging system Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow. Log levels with filtering A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. 
During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code. Categories When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else. Sinks (multiple output destinations) “Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations. Structured logging Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable: // Instead of this: logger.info("User 123 logged in from 192.168.1.1"); // You do this: logger.info("User logged in", { userId: 123, ip: "192.168.1.1" }); Now you can search for all logs where userId === 123 or filter by IP address. Context for request tracing In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system. Getting started with LogTape There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on. So why LogTape? A few reasons stood out to me: Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed. Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.” Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature. Let's set it up: npm add @logtape/logtape # npm pnpm add @logtape/logtape # pnpm yarn add @logtape/logtape # Yarn deno add jsr:@logtape/logtape # Deno bun add @logtape/logtape # Bun Configuration happens once, at your application's entry point: import { configure, getConsoleSink, getLogger } from "@logtape/logtape"; await configure({ sinks: { console: getConsoleSink(), // Where logs go }, loggers: [ { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log ], }); // Now you can log from anywhere in your app: const logger = getLogger(["my-app", "server"]); logger.info`Server started on port 3000`; logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`; Notice a few things: Configuration is explicit. You decide where logs go (sinks) and which logs to show (lowestLevel). Categories are hierarchical. The logger ["my-app", "server"] inherits settings from ["my-app"]. Template literals work. You can use backticks for a natural logging syntax. Categories and filtering: Controlling log verbosity Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. 
    But you don't want to wade through thousands of HTTP request logs to find them. Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application. await configure({ sinks: { console: getConsoleSink(), }, loggers: [ { category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above { category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too ], }); Now when you log from different parts of your app: // In your database module: const dbLogger = getLogger(["my-app", "database"]); dbLogger.debug`Executing query: ${sql}`; // This shows up // In your HTTP module: const httpLogger = getLogger(["my-app", "http"]); httpLogger.debug`Received request`; // This is filtered out (below "info") httpLogger.info`GET /api/users 200`; // This shows up Controlling third-party library logs If you're using libraries that also use LogTape, you can control their logs separately: await configure({ sinks: { console: getConsoleSink() }, loggers: [ { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Only show warnings and above from some-library { category: ["some-library"], lowestLevel: "warning", sinks: ["console"] }, ], }); The root logger Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything: await configure({ sinks: { console: getConsoleSink() }, loggers: [ // Catch all logs at info level { category: [], lowestLevel: "info", sinks: ["console"] }, // But show debug for your app { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, ], }); Log levels and when to use them LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
    trace: Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug.
    debug: Information useful during development. Variable values, state changes, flow control decisions.
    info: Normal operational messages. “Server started,” “User logged in,” “Job completed.”
    warning: Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config.
    error: Something failed. An operation couldn't complete, but the app is still running.
    fatal: The app is about to crash or is in an unrecoverable state.
    const logger = getLogger(["my-app"]); logger.trace`Entering processUser function`; logger.debug`Processing user ${{ userId: 123 }}`; logger.info`User successfully created`; logger.warn`Rate limit approaching: ${980}/1000 requests`; logger.error`Failed to save user: ${error.message}`; logger.fatal`Database connection lost, shutting down`; A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace. Structured logging: Beyond plain text At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?” If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead. Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this: Template literals (great for simple messages) const userId = 123; const action = "login"; logger.info`User ${userId} performed ${action}`; Message templates with properties (great for structured data) logger.info("User performed action", { userId: 123, action: "login", ip: "192.168.1.1", timestamp: new Date().toISOString(), }); You can reference properties in your message using placeholders: logger.info("User {userId} logged in from {ip}", { userId: 123, ip: "192.168.1.1", }); // Output: User 123 logged in from 192.168.1.1 Nested property access LogTape supports dot notation and array indexing in placeholders: logger.info("Order {order.id} placed by {order.customer.name}", { order: { id: "ORD-001", customer: { name: "Alice", email: "alice@example.com" }, }, }); logger.info("First item: {items[0].name}", { items: [{ name: "Widget", price: 9.99 }], }); Machine-readable output with JSON Lines For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this: import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape"; await configure({ sinks: { console: getConsoleSink({ formatter: jsonLinesFormatter }), }, loggers: [ { category: [], lowestLevel: "info", sinks: ["console"] }, ], }); Output: {"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}} Sending logs to different destinations (sinks) So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once. Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it. This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial. Console sink The simplest sink—outputs to the console: import { getConsoleSink } from "@logtape/logtape"; const consoleSink = getConsoleSink(); File sink For writing logs to files, install the @logtape/file package: npm add @logtape/file import { getFileSink, getRotatingFileSink } from "@logtape/file"; // Simple file sink const fileSink = getFileSink("app.log"); // Rotating file sink (rotates when file reaches 10MB, keeps 5 old files) const rotatingFileSink = getRotatingFileSink("app.log", { maxSize: 10 * 1024 * 1024, // 10MB maxFiles: 5, }); Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers. External services For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. 
LogTape has packages for popular services: // OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog) import { getOpenTelemetrySink } from "@logtape/otel"; // Sentry (for error tracking with stack traces and context) import { getSentrySink } from "@logtape/sentry"; // AWS CloudWatch Logs (for AWS-native log aggregation) import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs"; The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier. Multiple sinks Here's where things get interesting. You can send different logs to different destinations based on their level or category: await configure({ sinks: { console: getConsoleSink(), file: getFileSink("app.log"), errors: getSentrySink(), }, loggers: [ { category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file { category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry ], }); Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues. Custom sinks Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database. A sink is just a function that takes a LogRecord. That's it: import type { Sink } from "@logtape/logtape"; const slackSink: Sink = (record) => { // Only send errors and fatals to Slack if (record.level === "error" || record.level === "fatal") { fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ text: `[${record.level.toUpperCase()}] ${record.message.join("")}`, }), }); } }; The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code. Request tracing with contexts Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out. This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request. LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly. Explicit context The simplest approach is to create a logger with attached properties using .with(): function handleRequest(req: Request) { const requestId = crypto.randomUUID(); const logger = getLogger(["my-app", "http"]).with({ requestId }); logger.info`Request received`; // Includes requestId automatically processRequest(req, logger); logger.info`Request completed`; // Also includes requestId } This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance? Implicit context This is where implicit contexts shine. 
Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape). First, enable implicit contexts in your configuration: import { configure, getConsoleSink } from "@logtape/logtape"; import { AsyncLocalStorage } from "node:async_hooks"; await configure({ sinks: { console: getConsoleSink() }, loggers: [ { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, ], contextLocalStorage: new AsyncLocalStorage(), }); Then use withContext() in your request handler: import { withContext, getLogger } from "@logtape/logtape"; function handleRequest(req: Request) { const requestId = crypto.randomUUID(); return withContext({ requestId }, async () => { // Every log message in this callback includes requestId—automatically const logger = getLogger(["my-app"]); logger.info`Processing request`; await validateInput(req); // Logs here include requestId await processBusinessLogic(req); // Logs here too await saveToDatabase(req); // And here logger.info`Request complete`; }); } The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack. This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request. Framework integrations Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically: // Express import { expressLogger } from "@logtape/express"; app.use(expressLogger()); // Fastify import { getLogTapeFastifyLogger } from "@logtape/fastify"; const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() }); // Hono import { honoLogger } from "@logtape/hono"; app.use(honoLogger()); // Koa import { koaLogger } from "@logtape/koa"; app.use(koaLogger()); These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code. Using LogTape in libraries vs applications If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything? LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in. If you're writing a library The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's. // my-library/src/database.ts import { getLogger } from "@logtape/logtape"; const logger = getLogger(["my-library", "database"]); export function connect(url: string) { logger.debug`Connecting to ${url}`; // ... connection logic ... logger.info`Connected successfully`; } What happens when someone uses your library? If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. 
Your library works exactly as if the logging code wasn't there. If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you. This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide. If you're writing an application You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape: await configure({ sinks: { console: getConsoleSink() }, loggers: [ { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose { category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet { category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent ], }); This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code. Migrating from another logger? If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup: import { install } from "@logtape/adaptor-winston"; import winston from "winston"; install(winston.createLogger({ /* your existing config */ })); This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to. Production considerations Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind. Non-blocking mode By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up. Non-blocking mode buffers log messages and writes them in the background: const consoleSink = getConsoleSink({ nonBlocking: true }); const fileSink = getFileSink("app.log", { nonBlocking: true }); The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it. Sensitive data redaction Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data. 
LogTape's @logtape/redaction package helps you catch these before they become a problem: import { redactByPattern, EMAIL_ADDRESS_PATTERN, CREDIT_CARD_NUMBER_PATTERN, type RedactionPattern, } from "@logtape/redaction"; import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape"; const BEARER_TOKEN_PATTERN: RedactionPattern = { pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g, replacement: "[REDACTED]", }; const formatter = redactByPattern(defaultConsoleFormatter, [ EMAIL_ADDRESS_PATTERN, CREDIT_CARD_NUMBER_PATTERN, BEARER_TOKEN_PATTERN, ]); await configure({ sinks: { console: getConsoleSink({ formatter }), }, // ... }); With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net. See the redaction documentation for more patterns and field-based redaction. Edge functions and serverless Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost. The solution is to explicitly flush logs before returning: import { configure, dispose } from "@logtape/logtape"; export default { async fetch(request, env, ctx) { await configure({ /* ... */ }); // ... handle request ... ctx.waitUntil(dispose()); // Flush logs before worker terminates return new Response("OK"); }, }; The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent. Wrapping up Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements. LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library. If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next. Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.
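    To tie the pieces above together, here is a sketch of a single configure() call that combines the formatter, file rotation, category levels, and implicit-context storage shown earlier. The environment switch and file name are assumptions for illustration, not part of LogTape itself.

    import { AsyncLocalStorage } from "node:async_hooks";
    import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
    import { getRotatingFileSink } from "@logtape/file";

    const isProd = process.env.NODE_ENV === "production";

    await configure({
      sinks: {
        // Human-readable console output in development, JSON Lines in production.
        console: isProd
          ? getConsoleSink({ formatter: jsonLinesFormatter })
          : getConsoleSink(),
        // Persistent, size-capped log files on disk.
        file: getRotatingFileSink("app.log", {
          maxSize: 10 * 1024 * 1024, // 10MB
          maxFiles: 5,
        }),
      },
      loggers: [
        // Catch-all: info and above from everything, including libraries.
        { category: [], lowestLevel: "info", sinks: ["console", "file"] },
        // Our own app: verbose in development, quieter in production.
        {
          category: ["my-app"],
          lowestLevel: isProd ? "info" : "debug",
          sinks: ["console", "file"],
        },
      ],
      // Enables withContext() for request tracing.
      contextLocalStorage: new AsyncLocalStorage(),
    });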
  • 0 Votes
    1 Posts
    3 Views
    So you need to send emails from your JavaScript application. Email remains one of the most essential features in web apps—welcome emails, password resets, notifications—but the ecosystem is fragmented. Nodemailer doesn't work on edge functions. Each provider has its own SDK. And if you're using Deno or Bun, good luck finding libraries that actually work. This guide covers how to send emails across modern JavaScript runtimes using Upyo, a cross-runtime email library. TL;DR for the impatient If you just want working code, here's the quickest path to sending an email: import { createMessage } from "@upyo/core"; import { SmtpTransport } from "@upyo/smtp"; const transport = new SmtpTransport({ host: "smtp.gmail.com", port: 465, secure: true, auth: { user: "your-email@gmail.com", pass: "your-app-password", // Not your regular password! }, }); const message = createMessage({ from: "your-email@gmail.com", to: "recipient@example.com", subject: "Hello from my app!", content: { text: "This is my first email." }, }); const receipt = await transport.send(message); if (receipt.successful) { console.log("Sent:", receipt.messageId); } else { console.log("Failed:", receipt.errorMessages); } Install with: npm add @upyo/core @upyo/smtp That's it. This exact code works on Node.js, Deno, and Bun. But if you want to understand what's happening and explore more powerful options, read on. Why Upyo? Cross-runtime: Works on Node.js, Deno, Bun, and edge functions with the same API Zero dependencies: Keeps your bundle small Provider independence: Switch between SMTP, Mailgun, Resend, SendGrid, or Amazon SES without changing your application code Type-safe: Full TypeScript support with discriminated unions for error handling Built for testing: Includes a mock transport for unit tests Part 1: Getting started with Gmail SMTP Let's start with the most accessible option: Gmail's SMTP server. It's free, requires no additional accounts, and works great for development and low-volume production use. Step 1: Generate a Gmail app password Gmail doesn't allow you to use your regular password for SMTP. You need to create an app-specific password: Go to your Google Account Navigate to Security → 2-Step Verification (enable it if you haven't) At the bottom, click App passwords Select Mail and your device, then click Generate Copy the 16-character password Step 2: Install dependencies Choose your runtime and package manager: Node.js npm add @upyo/core @upyo/smtp # or: pnpm add @upyo/core @upyo/smtp # or: yarn add @upyo/core @upyo/smtp Deno deno add jsr:@upyo/core jsr:@upyo/smtp Bun bun add @upyo/core @upyo/smtp The same code works across all three runtimes—that's the beauty of Upyo. Step 3: Send your first email import { createMessage } from "@upyo/core"; import { SmtpTransport } from "@upyo/smtp"; // Create the transport (reuse this for multiple emails) const transport = new SmtpTransport({ host: "smtp.gmail.com", port: 465, secure: true, auth: { user: "your-email@gmail.com", pass: "abcd efgh ijkl mnop", // Your app password }, }); // Create and send a message const message = createMessage({ from: "your-email@gmail.com", to: "recipient@example.com", subject: "Welcome to my app!", content: { text: "Thanks for signing up. We're excited to have you!", html: "<h1>Welcome!</h1><p>Thanks for signing up. We're excited to have you!</p>", }, }); const receipt = await transport.send(message); if (receipt.successful) { console.log("Email sent successfully! 
Message ID:", receipt.messageId); } else { console.error("Failed to send email:", receipt.errorMessages.join(", ")); } // Don't forget to close connections when done await transport.closeAllConnections(); Let me highlight a few important details: secure: true with port 465: This establishes a TLS-encrypted connection from the start. Gmail requires encryption, so this combination is essential. Separate text and html content: Always provide both. Some email clients don't render HTML, and spam filters look more favorably on emails with plain text alternatives. The receipt pattern: Upyo uses discriminated unions for type-safe error handling. When receipt.successful is true, you get messageId. When it's false, you get errorMessages. This makes it impossible to forget error handling. Closing connections: SMTP maintains persistent TCP connections. Always close them when you're done, or use await using (shown next) to handle this automatically. Pro tip: automatic resource cleanup with await using Managing resources manually is error-prone—what if an exception occurs before closeAllConnections() is called? Modern JavaScript (ES2024) solves this with explicit resource management. import { createMessage } from "@upyo/core"; import { SmtpTransport } from "@upyo/smtp"; // Transport is automatically disposed when it goes out of scope await using transport = new SmtpTransport({ host: "smtp.gmail.com", port: 465, secure: true, auth: { user: "your-email@gmail.com", pass: "your-app-password", }, }); const message = createMessage({ from: "your-email@gmail.com", to: "recipient@example.com", subject: "Hello!", content: { text: "This email was sent with automatic cleanup!" }, }); await transport.send(message); // No need to call `closeAllConnections()` - it happens automatically! The await using keyword tells JavaScript to call the transport's cleanup method when execution leaves this scope—even if an error is thrown. This pattern is similar to Python's with statement or C#'s using block. It's supported in Node.js 22+, Deno, and Bun. What if your environment doesn't support await using? For older Node.js versions or environments without ES2024 support, use try/finally to ensure cleanup: const transport = new SmtpTransport({ host: "smtp.gmail.com", port: 465, secure: true, auth: { user: "your-email@gmail.com", pass: "your-app-password" }, }); try { await transport.send(message); } finally { await transport.closeAllConnections(); } This achieves the same result—cleanup happens whether the send succeeds or throws an error. Part 2: Adding attachments and rich content Real-world emails often need more than plain text. HTML emails with inline images Inline images appear directly in the email body rather than as downloadable attachments. The trick is to reference them using a Content-ID (CID) URL scheme. import { createMessage } from "@upyo/core"; import { SmtpTransport } from "@upyo/smtp"; import { readFile } from "node:fs/promises"; await using transport = new SmtpTransport({ host: "smtp.gmail.com", port: 465, secure: true, auth: { user: "your-email@gmail.com", pass: "your-app-password" }, }); // Read your logo file const logoContent = await readFile("./assets/logo.png"); const message = createMessage({ from: "your-email@gmail.com", to: "customer@example.com", subject: "Your order confirmation", content: { html: ` <div style="font-family: sans-serif; max-width: 600px; margin: 0 auto;"> <img src="cid:company-logo" alt="Company Logo" style="width: 150px;"> <h1>Order Confirmed!</h1> <p>Thank you for your purchase. 
Your order #12345 has been confirmed.</p> </div> `, text: "Order Confirmed! Thank you for your purchase. Your order #12345 has been confirmed.", }, attachments: [ { filename: "logo.png", content: logoContent, contentType: "image/png", contentId: "company-logo", // Referenced as cid:company-logo in HTML inline: true, }, ], }); await transport.send(message); Key points about inline images: contentId: This is the identifier you use in the HTML's src="cid:..." attribute. It can be any unique string. inline: true: This tells the email client to display the image within the message body, not as a separate attachment. Always include alt text: Some email clients block images by default, so the alt text ensures your message is still understandable. File attachments For regular attachments that recipients can download, use the standard File API. This approach works across all JavaScript runtimes. import { createMessage } from "@upyo/core"; import { SmtpTransport } from "@upyo/smtp"; import { readFile } from "node:fs/promises"; await using transport = new SmtpTransport({ host: "smtp.gmail.com", port: 465, secure: true, auth: { user: "your-email@gmail.com", pass: "your-app-password" }, }); // Read files to attach const invoicePdf = await readFile("./invoices/invoice-2024-001.pdf"); const reportXlsx = await readFile("./reports/monthly-report.xlsx"); const message = createMessage({ from: "billing@yourcompany.com", to: "client@example.com", cc: "accounting@yourcompany.com", subject: "Invoice #2024-001", content: { text: "Please find your invoice and monthly report attached.", }, attachments: [ new File([invoicePdf], "invoice-2024-001.pdf", { type: "application/pdf" }), new File([reportXlsx], "monthly-report.xlsx", { type: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", }), ], priority: "high", // Sets email priority headers }); await transport.send(message); A few notes on attachments: MIME types matter: Setting the correct type helps email clients display the right icon and open the file with the appropriate application. priority: "high": This sets the X-Priority header, which some email clients use to highlight important messages. Use it sparingly—overuse can trigger spam filters. Multiple recipients with different roles Email supports several recipient types, each with different visibility rules: import { createMessage } from "@upyo/core"; const message = createMessage({ from: { name: "Support Team", address: "support@yourcompany.com" }, to: [ "primary-recipient@example.com", { name: "John Smith", address: "john@example.com" }, ], cc: "manager@yourcompany.com", bcc: ["archive@yourcompany.com", "compliance@yourcompany.com"], replyTo: "no-reply@yourcompany.com", subject: "Your support ticket has been updated", content: { text: "We've responded to your ticket #5678." }, }); Understanding recipient types: to: Primary recipients. Everyone can see who else is in this field. cc (Carbon Copy): Secondary recipients. Visible to all recipients—use for people who should be informed but aren't the primary audience. bcc (Blind Carbon Copy): Hidden recipients. No one can see BCC addresses—useful for archiving or compliance without revealing internal processes. replyTo: Where replies should go. Useful when sending from a no-reply address but wanting responses to reach a real inbox. You can specify addresses as simple strings ("email@example.com") or as objects with name and address properties for display names. 
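    Before moving on to production providers, here is a sketch of the provider independence mentioned at the start: application code can depend only on the message shape and the receipt, so swapping transports is a one-line change where the transport is constructed. The SendsMail interface and the addresses below are illustrative assumptions; the real package presumably exports its own transport and receipt types.

    import { createMessage } from "@upyo/core";

    // Structural stand-in for "anything with an Upyo-style send() method".
    interface SendsMail {
      send(message: ReturnType<typeof createMessage>): Promise<
        | { successful: true; messageId: string }
        | { successful: false; errorMessages: readonly string[] }
      >;
    }

    // Works the same whether the caller passes an SmtpTransport, ResendTransport,
    // SendGridTransport, MailgunTransport, or SesTransport instance.
    export async function sendWelcomeEmail(transport: SendsMail, to: string) {
      const message = createMessage({
        from: "hello@yourdomain.com",
        to,
        subject: "Welcome!",
        content: { text: "Thanks for signing up." },
      });
      const receipt = await transport.send(message);
      if (!receipt.successful) {
        throw new Error(`Email failed: ${receipt.errorMessages.join(", ")}`);
      }
      return receipt.messageId;
    }

    In tests, the same helper could receive the mock transport the library ships for unit testing, so no real emails are sent.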
Part 3: Moving to production with email service providers Gmail SMTP is great for getting started, but for production applications, you'll want a dedicated email service provider. Here's why: Higher sending limits: Gmail caps you at ~500 emails/day for personal accounts Better deliverability: Dedicated services maintain sender reputation and handle bounces properly Analytics and tracking: See who opened your emails, clicked links, etc. Webhook notifications: Get real-time callbacks for delivery events No dependency on personal accounts: Production systems shouldn't rely on someone's Gmail The best part? With Upyo, switching providers requires minimal code changes—just swap the transport. Option A: Resend (modern and developer-friendly) Resend is a newer email service with an excellent developer experience. npm add @upyo/resend import { createMessage } from "@upyo/core"; import { ResendTransport } from "@upyo/resend"; const transport = new ResendTransport({ apiKey: process.env.RESEND_API_KEY!, }); const message = createMessage({ from: "hello@yourdomain.com", // Must be verified in Resend to: "user@example.com", subject: "Welcome aboard!", content: { text: "Thanks for joining us!", html: "<h1>Welcome!</h1><p>Thanks for joining us!</p>", }, tags: ["onboarding", "welcome"], // For analytics }); const receipt = await transport.send(message); if (receipt.successful) { console.log("Sent via Resend:", receipt.messageId); } Notice how similar this looks to the SMTP example? The only differences are the import and the transport configuration. Your message creation and sending logic stays exactly the same—that's Upyo's transport abstraction at work. Option B: SendGrid (enterprise-grade) SendGrid is a popular choice for high-volume senders, offering advanced analytics, template management, and a generous free tier. npm add @upyo/sendgrid import { createMessage } from "@upyo/core"; import { SendGridTransport } from "@upyo/sendgrid"; const transport = new SendGridTransport({ apiKey: process.env.SENDGRID_API_KEY!, clickTracking: true, openTracking: true, }); const message = createMessage({ from: "notifications@yourdomain.com", to: "user@example.com", subject: "Your weekly digest", content: { html: "<h1>This Week's Highlights</h1><p>Here's what you missed...</p>", text: "This Week's Highlights\n\nHere's what you missed...", }, tags: ["digest", "weekly"], }); await transport.send(message); Option C: Mailgun (reliable workhorse) Mailgun offers robust infrastructure with strong EU support—important if you need GDPR-compliant data residency. npm add @upyo/mailgun import { createMessage } from "@upyo/core"; import { MailgunTransport } from "@upyo/mailgun"; const transport = new MailgunTransport({ apiKey: process.env.MAILGUN_API_KEY!, domain: "mg.yourdomain.com", region: "eu", // or "us" }); const message = createMessage({ from: "team@yourdomain.com", to: "user@example.com", subject: "Important update", content: { text: "We have some news to share..." }, }); await transport.send(message); Option D: Amazon SES (cost-effective at scale) Amazon SES is incredibly affordable—about $0.10 per 1,000 emails. If you're already in the AWS ecosystem, it integrates seamlessly with IAM, CloudWatch, and other services. 
npm add @upyo/ses import { createMessage } from "@upyo/core"; import { SesTransport } from "@upyo/ses"; const transport = new SesTransport({ authentication: { type: "credentials", accessKeyId: process.env.AWS_ACCESS_KEY_ID!, secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!, }, region: "us-east-1", configurationSetName: "my-config-set", // Optional: for tracking }); const message = createMessage({ from: "alerts@yourdomain.com", to: "admin@example.com", subject: "System alert", content: { text: "CPU usage exceeded 90%" }, priority: "high", }); await transport.send(message); Part 4: Sending emails from edge functions Here's where many email solutions fall short. Edge functions (Cloudflare Workers, Vercel Edge, Deno Deploy) run in a restricted environment—they can't open raw TCP connections, which means SMTP is not an option. You must use an HTTP-based transport like Resend, SendGrid, Mailgun, or Amazon SES. The good news? Your code barely changes. Cloudflare Workers example // src/index.ts import { createMessage } from "@upyo/core"; import { ResendTransport } from "@upyo/resend"; export default { async fetch(request: Request, env: Env): Promise<Response> { const transport = new ResendTransport({ apiKey: env.RESEND_API_KEY, }); const message = createMessage({ from: "noreply@yourdomain.com", to: "user@example.com", subject: "Request received", content: { text: "We got your request and are processing it." }, }); const receipt = await transport.send(message); if (receipt.successful) { return new Response(`Email sent: ${receipt.messageId}`); } else { return new Response(`Failed: ${receipt.errorMessages.join(", ")}`, { status: 500, }); } }, }; interface Env { RESEND_API_KEY: string; } Vercel Edge Functions example // app/api/send-email/route.ts import { createMessage } from "@upyo/core"; import { SendGridTransport } from "@upyo/sendgrid"; export const runtime = "edge"; export async function POST(request: Request) { const { to, subject, body } = await request.json(); const transport = new SendGridTransport({ apiKey: process.env.SENDGRID_API_KEY!, }); const message = createMessage({ from: "app@yourdomain.com", to, subject, content: { text: body }, }); const receipt = await transport.send(message); if (receipt.successful) { return Response.json({ success: true, messageId: receipt.messageId }); } else { return Response.json( { success: false, errors: receipt.errorMessages }, { status: 500 } ); } } Deno Deploy example // main.ts import { createMessage } from "jsr:@upyo/core"; import { MailgunTransport } from "jsr:@upyo/mailgun"; Deno.serve(async (request: Request) => { if (request.method !== "POST") { return new Response("Method not allowed", { status: 405 }); } const { to, subject, body } = await request.json(); const transport = new MailgunTransport({ apiKey: Deno.env.get("MAILGUN_API_KEY")!, domain: Deno.env.get("MAILGUN_DOMAIN")!, region: "us", }); const message = createMessage({ from: "noreply@yourdomain.com", to, subject, content: { text: body }, }); const receipt = await transport.send(message); if (receipt.successful) { return Response.json({ success: true, messageId: receipt.messageId }); } else { return Response.json( { success: false, errors: receipt.errorMessages }, { status: 500 } ); } }); Part 5: Improving deliverability with DKIM Ever wonder why some emails land in spam while others don't? Email authentication plays a huge role. 
DKIM (DomainKeys Identified Mail) is one of the key mechanisms—it lets you digitally sign your emails so recipients can verify they actually came from your domain and weren't tampered with in transit. Without DKIM: Your emails are more likely to be flagged as spam Recipients have no way to verify you're really who you claim to be Sophisticated phishing attacks can impersonate your domain Setting up DKIM with Upyo First, generate a DKIM key pair. You can use OpenSSL: # Generate a 2048-bit RSA private key openssl genrsa -out dkim-private.pem 2048 # Extract the public key openssl rsa -in dkim-private.pem -pubout -out dkim-public.pem Then configure your SMTP transport: import { createMessage } from "@upyo/core"; import { SmtpTransport } from "@upyo/smtp"; import { readFileSync } from "node:fs"; const transport = new SmtpTransport({ host: "smtp.example.com", port: 587, secure: false, auth: { user: "user@yourdomain.com", pass: "password", }, dkim: { signatures: [ { signingDomain: "yourdomain.com", selector: "mail", // Creates DNS record at mail._domainkey.yourdomain.com privateKey: readFileSync("./dkim-private.pem", "utf8"), algorithm: "rsa-sha256", // or "ed25519-sha256" for shorter keys }, ], }, }); The key configuration options: signingDomain: Must match your email's "From" domain selector: An arbitrary name that becomes part of your DNS record (e.g., mail creates a record at mail._domainkey.yourdomain.com) algorithm: RSA-SHA256 is widely supported; Ed25519-SHA256 offers shorter keys (see below) Adding the DNS record Add a TXT record to your domain's DNS: Name: mail._domainkey (or mail._domainkey.yourdomain.com depending on your DNS provider) Value: v=DKIM1; k=rsa; p=YOUR_PUBLIC_KEY_HERE Extract the public key value (remove headers, footers, and newlines from the .pem file): cat dkim-public.pem | grep -v "^-" | tr -d '\n' Using Ed25519 for shorter keys RSA-2048 keys are long—about 400 characters for the public key. This can be problematic because DNS TXT records have size limits, and some DNS providers struggle with long records. Ed25519 provides equivalent security with much shorter keys (around 44 characters). If your email infrastructure supports it, Ed25519 is the modern choice. # Generate Ed25519 key pair openssl genpkey -algorithm ed25519 -out dkim-ed25519-private.pem openssl pkey -in dkim-ed25519-private.pem -pubout -out dkim-ed25519-public.pem const transport = new SmtpTransport({ // ... other config dkim: { signatures: [ { signingDomain: "yourdomain.com", selector: "mail2025", privateKey: readFileSync("./dkim-ed25519-private.pem", "utf8"), algorithm: "ed25519-sha256", }, ], }, }); Part 6: Bulk email sending When you need to send emails to many recipients—newsletters, notifications, marketing campaigns—you have two approaches: The wrong way: looping with send() // ❌ Don't do this for bulk sending for (const subscriber of subscribers) { await transport.send(createMessage({ from: "newsletter@example.com", to: subscriber.email, subject: "Weekly update", content: { text: "..." 
}, })); } This works, but it's inefficient: Each send() call waits for the previous one to complete No automatic batching or optimization Harder to track overall progress The right way: using sendMany() The sendMany() method is designed for bulk operations: import { createMessage } from "@upyo/core"; import { ResendTransport } from "@upyo/resend"; const transport = new ResendTransport({ apiKey: process.env.RESEND_API_KEY!, }); const subscribers = [ { email: "alice@example.com", name: "Alice" }, { email: "bob@example.com", name: "Bob" }, { email: "charlie@example.com", name: "Charlie" }, // ... potentially thousands more ]; // Create personalized messages const messages = subscribers.map((subscriber) => createMessage({ from: "newsletter@yourdomain.com", to: subscriber.email, subject: "Your weekly digest", content: { html: `<h1>Hi ${subscriber.name}!</h1><p>Here's what's new this week...</p>`, text: `Hi ${subscriber.name}!\n\nHere's what's new this week...`, }, tags: ["newsletter", "weekly"], }) ); // Send all messages efficiently let successCount = 0; let failureCount = 0; for await (const receipt of transport.sendMany(messages)) { if (receipt.successful) { successCount++; } else { failureCount++; console.error("Failed:", receipt.errorMessages.join(", ")); } } console.log(`Sent: ${successCount}, Failed: ${failureCount}`); Why sendMany() is better: Automatic batching: Some transports (like Resend) combine multiple messages into a single API call Connection reuse: SMTP transport reuses connections from the pool Streaming results: You get receipts as they complete, not all at once Resilient: One failure doesn't stop the rest Progress tracking for large batches const totalMessages = messages.length; let processed = 0; for await (const receipt of transport.sendMany(messages)) { processed++; if (processed % 100 === 0) { console.log(`Progress: ${processed}/${totalMessages} (${Math.round((processed / totalMessages) * 100)}%)`); } if (!receipt.successful) { console.error(`Message ${processed} failed:`, receipt.errorMessages); } } console.log("Batch complete!"); When to use send() vs sendMany() Scenario Use Single transactional email (welcome, password reset) send() A few emails (under 10) send() in a loop is fine Newsletters, bulk notifications sendMany() Batch processing from a queue sendMany() Part 7: Testing without sending real emails Upyo includes a MockTransport for testing: No external dependencies: Tests run offline, in CI, anywhere Deterministic: No flaky tests due to network issues Fast: No HTTP requests or SMTP handshakes Inspectable: You can verify exactly what would have been sent Basic testing setup import { createMessage } from "@upyo/core"; import { MockTransport } from "@upyo/mock"; import assert from "node:assert"; import { describe, it, beforeEach } from "node:test"; describe("Email functionality", () => { let transport: MockTransport; beforeEach(() => { transport = new MockTransport(); }); it("should send welcome email after registration", async () => { // Your application code would call this const message = createMessage({ from: "welcome@yourapp.com", to: "newuser@example.com", subject: "Welcome to our app!", content: { text: "Thanks for signing up!" 
}, }); const receipt = await transport.send(message); // Assertions assert.strictEqual(receipt.successful, true); assert.strictEqual(transport.getSentMessagesCount(), 1); const sentMessage = transport.getLastSentMessage(); assert.strictEqual(sentMessage?.subject, "Welcome to our app!"); assert.strictEqual(sentMessage?.recipients[0].address, "newuser@example.com"); }); it("should handle email failures gracefully", async () => { // Simulate a failure transport.setNextResponse({ successful: false, errorMessages: ["Invalid recipient address"], }); const message = createMessage({ from: "test@yourapp.com", to: "invalid-email", subject: "Test", content: { text: "Test" }, }); const receipt = await transport.send(message); assert.strictEqual(receipt.successful, false); assert.ok(receipt.errorMessages.includes("Invalid recipient address")); }); }); The key testing methods: getSentMessagesCount(): How many emails were “sent” getLastSentMessage(): The most recent message getSentMessages(): All messages as an array setNextResponse(): Force the next send to succeed or fail with specific errors Simulating real-world conditions import { MockTransport } from "@upyo/mock"; // Simulate network delays const slowTransport = new MockTransport({ delay: 500, // 500ms delay per email }); // Simulate random failures (10% failure rate) const unreliableTransport = new MockTransport({ failureRate: 0.1, }); // Simulate variable latency const realisticTransport = new MockTransport({ randomDelayRange: { min: 100, max: 500 }, }); Testing async email workflows import { MockTransport } from "@upyo/mock"; const transport = new MockTransport(); // Start your async operation that sends emails startUserRegistration("newuser@example.com"); // Wait for the expected emails to be sent await transport.waitForMessageCount(2, 5000); // Wait for 2 emails, 5s timeout // Or wait for a specific email const welcomeEmail = await transport.waitForMessage( (msg) => msg.subject.includes("Welcome"), 3000 ); console.log("Welcome email was sent:", welcomeEmail.subject); Part 8: Provider failover with PoolTransport What happens if your email provider goes down? For mission-critical applications, you need redundancy. PoolTransport combines multiple providers with automatic failover—if one fails, it tries the next. import { PoolTransport } from "@upyo/pool"; import { ResendTransport } from "@upyo/resend"; import { SendGridTransport } from "@upyo/sendgrid"; import { MailgunTransport } from "@upyo/mailgun"; import { createMessage } from "@upyo/core"; // Create multiple transports const resend = new ResendTransport({ apiKey: process.env.RESEND_API_KEY! }); const sendgrid = new SendGridTransport({ apiKey: process.env.SENDGRID_API_KEY! }); const mailgun = new MailgunTransport({ apiKey: process.env.MAILGUN_API_KEY!, domain: "mg.yourdomain.com", }); // Combine them with priority-based failover const transport = new PoolTransport({ strategy: "priority", transports: [ { transport: resend, priority: 100 }, // Try first { transport: sendgrid, priority: 50 }, // Fallback { transport: mailgun, priority: 10 }, // Last resort ], maxRetries: 3, }); const message = createMessage({ from: "critical@yourdomain.com", to: "admin@example.com", subject: "Critical alert", content: { text: "This email will try multiple providers if needed." }, }); const receipt = await transport.send(message); // Automatically tries Resend first, then SendGrid, then Mailgun if others fail The priority values determine the order—higher numbers are tried first. 
If Resend fails (network error, rate limit, etc.), the pool automatically retries with SendGrid, then Mailgun. For more advanced routing strategies (weighted distribution, content-based routing), see the pool transport documentation. Part 9: Observability with OpenTelemetry In production, you'll want to track email metrics: send rates, failure rates, latency. Upyo integrates with OpenTelemetry: import { createOpenTelemetryTransport } from "@upyo/opentelemetry"; import { SmtpTransport } from "@upyo/smtp"; const baseTransport = new SmtpTransport({ host: "smtp.example.com", port: 587, auth: { user: "user", pass: "password" }, }); const transport = createOpenTelemetryTransport(baseTransport, { serviceName: "email-service", tracing: { enabled: true }, metrics: { enabled: true }, }); // Now all email operations generate traces and metrics automatically await transport.send(message); This gives you: Delivery success/failure rates Send operation latency histograms Error classification by type Distributed tracing for debugging See the OpenTelemetry documentation for details. Quick reference: choosing the right transport Scenario Recommended Transport Development/testing Gmail SMTP or MockTransport Small production app Resend or SendGrid High volume (100k+/month) Amazon SES Edge functions Resend, SendGrid, or Mailgun Self-hosted infrastructure SMTP with DKIM Mission-critical PoolTransport with failover EU data residency Mailgun (EU region) or self-hosted Wrapping up This guide covered the most popular transports, but Upyo also supports: JMAP: Modern email protocol (RFC 8620/8621) for JMAP-compatible servers like Fastmail and Stalwart Plunk: Developer-friendly email service with self-hosting option And you can always create a custom transport for any email service not yet supported. Resources 📚 Documentation 📦 npm packages 📦 JSR packages 🐙 GitHub repository Have questions or feedback? Feel free to open an issue. What's been your biggest pain point when sending emails from JavaScript? Let me know in the comments—I'm curious what challenges others have run into. Upyo (pronounced /oo-pyo/) comes from the Korean word 郵票, meaning “postage stamp.”
  • 0 Votes
    5 Posts
    11 Views
    @bgl@hackers.pub Hmm, now that I think about it, that's true. But if you keep going down that path, doesn't it start to look more and more like LangGraph or Mastra…? 🤔
  • 0 Votes
    1 Posts
    3 Views
    Wired Elements — the perfect companion to rough.js. A set of common UI elements with a hand-drawn, sketchy look. These can be used for wireframes, mockups, or just for the fun hand-drawn look. The elements are drawn with enough randomness that no two renderings will be exactly the same — just like two separate hand-drawn shapes. https://monodes.com/predaelli/2025/12/28/wired-elements/ #Javascript
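    For a rough idea of what usage looks like — a sketch assuming the wired-button and wired-card custom elements from the project's documentation, which are registered simply by importing the package:

    import "wired-elements"; // registers the <wired-*> custom elements

    const card = document.createElement("wired-card");
    const button = document.createElement("wired-button");
    button.textContent = "Looks hand-drawn";
    button.addEventListener("click", () => console.log("sketchy click"));
    card.appendChild(button);
    document.body.appendChild(card);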
  • 0 Votes
    1 Posts
    3 Views
    When I started building Fedify, an ActivityPub server framework, I ran into a problem that surprised me: I couldn't figure out how to add logging. Not because logging is hard—there are dozens of mature logging libraries for JavaScript. The problem was that they're primarily designed for applications, not for libraries that want to stay unobtrusive. I wrote about this a few months ago, and the response was modest—some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. I'll be honest: English isn't my first language, so I use LLMs to polish my writing. But the ideas and technical content are mine. Several readers wanted to see a real-world example rather than theory. The problem: existing loggers assume you're building an app Fedify helps developers build federated social applications using the ActivityPub protocol. If you've ever worked with federation, you know debugging can be painful. When an activity fails to deliver, you need to answer questions like: Did the HTTP request actually go out? Was the signature generated correctly? Did the remote server reject it? Why? Was there a problem parsing the response? These questions span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and more. Without good logging, debugging turns into guesswork. But here's the dilemma I faced as a library author: if I add verbose logging to help with debugging, I risk annoying users who don't want their console cluttered with Fedify's internal chatter. If I stay silent, users struggle to diagnose issues. I looked at the existing options. With winston or Pino, I would have to either: Configure a logger inside Fedify (imposing my choices on users), or Ask users to pass a logger instance to Fedify (adding boilerplate) There's also debug, which is designed for this use case. But it doesn't give you structured, level-based logs that ops teams expect—and it relies on environment variables, which some runtimes like Deno restrict by default for security reasons. None of these felt right. So I built LogTape—a logging library designed from the ground up for library authors. And Fedify became its first real user. The solution: hierarchical categories with zero default output The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it. Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. Here's how the categories are organized: Category What it logs ["fedify"] Everything from the library ["fedify", "federation", "inbox"] Incoming activities ["fedify", "federation", "outbox"] Outgoing activities ["fedify", "federation", "http"] HTTP requests and responses ["fedify", "sig", "http"] HTTP Signature operations ["fedify", "sig", "ld"] Linked Data Signature operations ["fedify", "sig", "key"] Key generation and retrieval ["fedify", "runtime", "docloader"] JSON-LD document loading ["fedify", "webfinger", "lookup"] WebFinger resource lookups …and about a dozen more. Each category corresponds to a distinct subsystem. 
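On the library side, each subsystem obtains its logger by category — roughly like this (a sketch using LogTape's getLogger; the function, message, and property names are illustrative, not Fedify's actual code):

import { getLogger } from "@logtape/logtape";

const logger = getLogger(["fedify", "federation", "inbox"]);

function onInboxActivity(activityId: string, actor: string) {
  // Structured, leveled logging; this emits nothing unless the application opts in.
  logger.debug("Received activity {activityId} from {actor}", { activityId, actor });
}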
This means a user can configure logging like this: await configure({ sinks: { console: getConsoleSink() }, loggers: [ // Show errors from all of Fedify { category: "fedify", sinks: ["console"], lowestLevel: "error" }, // But show debug info for inbox processing specifically { category: ["fedify", "federation", "inbox"], sinks: ["console"], lowestLevel: "debug" }, ], }); When something goes wrong with incoming activities, they get detailed logs for that subsystem while keeping everything else quiet. No code changes required—just configuration. Request tracing with implicit contexts The hierarchical categories solved the filtering problem, but there was another challenge: correlating logs across async boundaries. In a federated system, a single user action might trigger a cascade of operations: fetch a remote actor, verify their signature, process the activity, fan out to followers, and so on. When something fails, you need to correlate all the log entries for that specific request. Fedify uses LogTape's implicit context feature to automatically tag every log entry with a requestId: await configure({ sinks: { file: getFileSink("fedify.jsonl", { formatter: jsonLinesFormatter }) }, loggers: [ { category: "fedify", sinks: ["file"], lowestLevel: "info" }, ], contextLocalStorage: new AsyncLocalStorage(), // Enables implicit contexts }); With this configuration, every log entry automatically includes a requestId property. When you need to debug a specific request, you can filter your logs: jq 'select(.properties.requestId == "abc-123")' fedify.jsonl And you'll see every log entry from that request—across all subsystems, all in order. No manual correlation needed. The requestId is derived from standard headers when available (X-Request-Id, Traceparent, etc.), so it integrates naturally with existing observability infrastructure. What users actually see So what does all this configuration actually mean for someone using Fedify? If a Fedify user doesn't configure LogTape at all, they see nothing. No warnings about missing configuration, no default output, and minimal performance overhead—the logging calls are essentially no-ops. For basic visibility, they can enable error-level logging for all of Fedify with three lines of configuration. When debugging a specific issue, they can enable debug-level logging for just the relevant subsystem. And if they're running in production with serious observability requirements, they can pipe structured JSON logs to their monitoring system with request correlation built in. The same library code supports all these scenarios—whether the user is running on Node.js, Deno, Bun, or edge functions, without extra polyfills or shims. The user decides what they need. Lessons learned Building Fedify with LogTape taught me a few things: Design your categories early. The hierarchical structure should reflect how users will actually want to filter logs. I organized Fedify's categories around subsystems that users might need to debug independently. Use structured logging. Properties like requestId, activityId, and actorId are far more useful than string interpolation when you need to analyze logs programmatically. Implicit contexts turned out to be more useful than I expected. Being able to correlate logs across async boundaries without passing context manually made debugging distributed operations much easier. When a user reports that activity delivery failed, I can give them a single jq command to extract everything relevant. Trust your users. 
Some library authors worry about exposing too much internal detail through logs. I've found the opposite—users appreciate being able to see what's happening when they need to. The key is making it opt-in. Try it yourself If you're building a library and struggling with the logging question—how much to log, how to give users control, how to avoid being noisy—I'd encourage you to look at how Fedify does it. The Fedify logging documentation explains everything in detail. And if you want to understand the philosophy behind LogTape's design, my earlier post covers that. LogTape isn't trying to replace winston or Pino for application developers who are happy with those tools. It fills a different gap: logging for libraries that want to stay out of the way until users need them. If that's what you're looking for, it might be a better fit than the usual app-centric loggers.
  • 0 Votes
    1 Posts
    0 Views
    NetSupport RAT: the invisible malware that antivirus software can't stop 📌 Link to the article: https://www.redhotcyber.com/post/netsupport-rat-il-malware-invisibile-che-gli-antivirus-non-possono-fermare/ #redhotcyber #news #cybersecurity #hacking #malware #ransomware #netSupportRAT #javascript
  • 0 Votes
    1 Posts
    7 Views
    If you've built CLI tools, you've written code like this: if (opts.reporter === "junit" && !opts.outputFile) { throw new Error("--output-file is required for junit reporter"); } if (opts.reporter === "html" && !opts.outputFile) { throw new Error("--output-file is required for html reporter"); } if (opts.reporter === "console" && opts.outputFile) { console.warn("--output-file is ignored for console reporter"); } A few months ago, I wrote Stop writing CLI validation. Parse it right the first time. about parsing individual option values correctly. But it didn't cover the relationships between options. In the code above, --output-file only makes sense when --reporter is junit or html. When it's console, the option shouldn't exist at all. We're using TypeScript. We have a powerful type system. And yet, here we are, writing runtime checks that the compiler can't help with. Every time we add a new reporter type, we need to remember to update these checks. Every time we refactor, we hope we didn't miss one. The state of TypeScript CLI parsers The old guard—Commander, yargs, minimist—were built before TypeScript became mainstream. They give you bags of strings and leave type safety as an exercise for the reader. But we've made progress. Modern TypeScript-first libraries like cmd-ts and Clipanion (the library powering Yarn Berry) take types seriously: // cmd-ts const app = command({ args: { reporter: option({ type: string, long: 'reporter' }), outputFile: option({ type: string, long: 'output-file' }), }, handler: (args) => { // args.reporter: string // args.outputFile: string }, }); // Clipanion class TestCommand extends Command { reporter = Option.String('--reporter'); outputFile = Option.String('--output-file'); } These libraries infer types for individual options. --port is a number. --verbose is a boolean. That's real progress. But here's what they can't do: express that --output-file is required when --reporter is junit, and forbidden when --reporter is console. The relationship between options isn't captured in the type system. So you end up writing validation code anyway: handler: (args) => { // Both cmd-ts and Clipanion need this if (args.reporter === "junit" && !args.outputFile) { throw new Error("--output-file required for junit"); } // args.outputFile is still string | undefined // TypeScript doesn't know it's definitely string when reporter is "junit" } Rust's clap and Python's Click have requires and conflicts_with attributes, but those are runtime checks too. They don't change the result type. If the parser configuration knows about option relationships, why doesn't that knowledge show up in the result type? Modeling relationships with conditional() Optique treats option relationships as a first-class concept. Here's the test reporter scenario: import { conditional, object } from "@optique/core/constructs"; import { option } from "@optique/core/primitives"; import { choice, string } from "@optique/core/valueparser"; import { run } from "@optique/run"; const parser = conditional( option("--reporter", choice(["console", "junit", "html"])), { console: object({}), junit: object({ outputFile: option("--output-file", string()), }), html: object({ outputFile: option("--output-file", string()), openBrowser: option("--open-browser"), }), } ); const [reporter, config] = run(parser); The conditional() combinator takes a discriminator option (--reporter) and a map of branches. Each branch defines what other options are valid for that discriminator value. 
TypeScript infers the result type automatically: type Result = | ["console", {}] | ["junit", { outputFile: string }] | ["html", { outputFile: string; openBrowser: boolean }]; When reporter is "junit", outputFile is string—not string | undefined. The relationship is encoded in the type. Now your business logic gets real type safety: const [reporter, config] = run(parser); switch (reporter) { case "console": runWithConsoleOutput(); break; case "junit": // TypeScript knows config.outputFile is string writeJUnitReport(config.outputFile); break; case "html": // TypeScript knows config.outputFile and config.openBrowser exist writeHtmlReport(config.outputFile); if (config.openBrowser) openInBrowser(config.outputFile); break; } No validation code. No runtime checks. If you add a new reporter type and forget to handle it in the switch, the compiler tells you. A more complex example: database connections Test reporters are a nice example, but let's try something with more variation. Database connection strings: myapp --db=sqlite --file=./data.db myapp --db=postgres --host=localhost --port=5432 --user=admin myapp --db=mysql --host=localhost --port=3306 --user=root --ssl Each database type needs completely different options: SQLite just needs a file path PostgreSQL needs host, port, user, and optionally password MySQL needs host, port, user, and has an SSL flag Here's how you model this: import { conditional, object } from "@optique/core/constructs"; import { withDefault, optional } from "@optique/core/modifiers"; import { option } from "@optique/core/primitives"; import { choice, string, integer } from "@optique/core/valueparser"; const dbParser = conditional( option("--db", choice(["sqlite", "postgres", "mysql"])), { sqlite: object({ file: option("--file", string()), }), postgres: object({ host: option("--host", string()), port: withDefault(option("--port", integer()), 5432), user: option("--user", string()), password: optional(option("--password", string())), }), mysql: object({ host: option("--host", string()), port: withDefault(option("--port", integer()), 3306), user: option("--user", string()), ssl: option("--ssl"), }), } ); The inferred type: type DbConfig = | ["sqlite", { file: string }] | ["postgres", { host: string; port: number; user: string; password?: string }] | ["mysql", { host: string; port: number; user: string; ssl: boolean }]; Notice the details: PostgreSQL defaults to port 5432, MySQL to 3306. PostgreSQL has an optional password, MySQL has an SSL flag. Each database type has exactly the options it needs—no more, no less. With this structure, writing dbConfig.ssl when the mode is sqlite isn't a runtime error—it's a compile-time impossibility. Try expressing this with requires_if attributes. You can't. The relationships are too rich. The pattern is everywhere Once you see it, you find this pattern in many CLI tools: Authentication modes: const authParser = conditional( option("--auth", choice(["none", "basic", "token", "oauth"])), { none: object({}), basic: object({ username: option("--username", string()), password: option("--password", string()), }), token: object({ token: option("--token", string()), }), oauth: object({ clientId: option("--client-id", string()), clientSecret: option("--client-secret", string()), tokenUrl: option("--token-url", url()), }), } ); Deployment targets, output formats, connection protocols—anywhere you have a mode selector that determines what other options are valid. 
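Consuming any of these parsers follows the same tuple pattern as the reporter example earlier — a sketch for the auth parser above (run comes from @optique/run as before; the connect* handlers are placeholders):

import { run } from "@optique/run";

const [authMode, auth] = run(authParser);

switch (authMode) {
  case "none":
    connectAnonymously();
    break;
  case "basic":
    // TypeScript knows auth.username and auth.password are strings here
    connectWithBasicAuth(auth.username, auth.password);
    break;
  case "token":
    connectWithToken(auth.token);
    break;
  case "oauth":
    connectWithOAuth(auth.clientId, auth.clientSecret, auth.tokenUrl);
    break;
}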
Why conditional() exists Optique already has an or() combinator for mutually exclusive alternatives. Why do we need conditional()? The or() combinator distinguishes branches based on structure—which options are present. It works well for subcommands like git commit vs git push, where the arguments differ completely. But in the reporter example, the structure is identical: every branch has a --reporter flag. The difference lies in the flag's value, not its presence. // This won't work as intended const parser = or( object({ reporter: option("--reporter", choice(["console"])) }), object({ reporter: option("--reporter", choice(["junit", "html"])), outputFile: option("--output-file", string()) }), ); When you pass --reporter junit, or() tries to pick a branch based on what options are present. Both branches have --reporter, so it can't distinguish them structurally. conditional() solves this by reading the discriminator's value first, then selecting the appropriate branch. It bridges the gap between structural parsing and value-based decisions. The structure is the constraint Instead of parsing options into a loose type and then validating relationships, define a parser whose structure is the constraint. Traditional approach Optique approach Parse → Validate → Use Parse (with constraints) → Use Types and validation logic maintained separately Types reflect the constraints Mismatches found at runtime Mismatches found at compile time The parser definition becomes the single source of truth. Add a new reporter type? The parser definition changes, the inferred type changes, and the compiler shows you everywhere that needs updating. Try it If this resonates with a CLI you're building: Documentation Tutorial conditional() reference GitHub Next time you're about to write an if statement checking option relationships, ask: could the parser express this constraint instead? The structure of your parser is the constraint. You might not need that validation code at all.
  • 0 Votes
    1 Posts
    8 Views
    We're excited to announce Optique 0.8.0! This release introduces powerful new features for building sophisticated CLI applications: the conditional() combinator for discriminated union patterns, the passThrough() parser for wrapper tools, and the new @optique/logtape package for seamless logging configuration. Optique is a type-safe combinatorial CLI parser for TypeScript, providing a functional approach to building command-line interfaces with composable parsers and full type inference. New conditional parsing with conditional() Ever needed to enable different sets of options based on a discriminator value? The new conditional() combinator makes this pattern first-class. It creates discriminated unions where certain options only become valid when a specific discriminator value is selected. import { conditional, object } from "@optique/core/constructs"; import { option } from "@optique/core/primitives"; import { choice, string } from "@optique/core/valueparser"; const parser = conditional( option("--reporter", choice(["console", "junit", "html"])), { console: object({}), junit: object({ outputFile: option("--output-file", string()) }), html: object({ outputFile: option("--output-file", string()) }), } ); // Result type: ["console", {}] | ["junit", { outputFile: string }] | ... Key features: Explicit discriminator option determines which branch is selected Tuple result [discriminator, branchValue] for clear type narrowing Optional default branch for when discriminator is not provided Clear error messages indicating which options are required for each discriminator value The conditional() parser provides a more structured alternative to or() for discriminated union patterns. Use it when you have an explicit discriminator option that determines which set of options is valid. See the conditional() documentation for more details and examples. Pass-through options with passThrough() Building wrapper CLI tools that need to forward unrecognized options to an underlying tool? The new passThrough() parser enables legitimate wrapper/proxy patterns by capturing unknown options without validation errors. import { object } from "@optique/core/constructs"; import { option, passThrough } from "@optique/core/primitives"; const parser = object({ debug: option("--debug"), extra: passThrough(), }); // mycli --debug --foo=bar --baz=qux // → { debug: true, extra: ["--foo=bar", "--baz=qux"] } Key features: Three capture formats: "equalsOnly" (default, safest), "nextToken" (captures --opt val pairs), and "greedy" (captures all remaining tokens) Lowest priority (−10) ensures explicit parsers always match first Respects -- options terminator in "equalsOnly" and "nextToken" modes Works seamlessly with object(), subcommands, and other combinators This feature is designed for building Docker-like CLIs, build tool wrappers, or any tool that proxies commands to another process. See the passThrough() documentation for usage patterns and best practices. LogTape logging integration The new @optique/logtape package provides seamless integration with LogTape, enabling you to configure logging through command-line arguments with various parsing strategies. 
# Deno deno add --jsr @optique/logtape @logtape/logtape # npm npm add @optique/logtape @logtape/logtape Quick start with the loggingOptions() preset: import { loggingOptions, createLoggingConfig } from "@optique/logtape"; import { object } from "@optique/core/constructs"; import { parse } from "@optique/core/parser"; import { configure } from "@logtape/logtape"; const parser = object({ logging: loggingOptions({ level: "verbosity" }), }); const args = ["-vv", "--log-output=-"]; const result = parse(parser, args); if (result.success) { const config = await createLoggingConfig(result.value.logging); await configure(config); } The package offers multiple approaches to control log verbosity: verbosity() parser: The classic -v/-vv/-vvv pattern where each flag increases verbosity (no flags → "warning", -v → "info", -vv → "debug", -vvv → "trace") debug() parser: Simple --debug/-d flag that toggles between normal and debug levels logLevel() value parser: Explicit --log-level=debug option for direct level selection logOutput() parser: Log output destination with - for console or file path for file output See the LogTape integration documentation for complete examples and configuration options. Bug fix: negative integers now accepted Fixed an issue where the integer() value parser rejected negative integers when using type: "number". The regex pattern has been updated from /^\d+$/ to /^-?\d+$/ to correctly handle values like -42. Note that type: "bigint" already accepted negative integers, so this change brings consistency between the two types. Installation # Deno deno add jsr:@optique/core # npm npm add @optique/core # pnpm pnpm add @optique/core # Yarn yarn add @optique/core # Bun bun add @optique/core For the LogTape integration: # Deno deno add --jsr @optique/logtape @logtape/logtape # npm npm add @optique/logtape @logtape/logtape # pnpm pnpm add @optique/logtape @logtape/logtape # Yarn yarn add @optique/logtape @logtape/logtape # Bun bun add @optique/logtape @logtape/logtape Looking forward Optique 0.8.0 continues our focus on making CLI development more expressive and type-safe. The conditional() combinator brings discriminated union patterns to the forefront, passThrough() enables new wrapper tool use cases, and the LogTape integration makes logging configuration a breeze. As always, all new features maintain full backward compatibility—your existing parsers continue to work unchanged. We're grateful to the community for feedback and suggestions. If you have ideas for future improvements or encounter any issues, please let us know through GitHub Issues. For more information about Optique and its features, visit the documentation or check out the full changelog.
  • 0 Votes
    1 Posts
    8 Views
    CVSS 10.0 critical severity vulnerability affecting server-side use of React.js, tracked as CVE-2025-55182 in React.js and CVE-2025-66478 specifically for the Next.js framework. https://react2shell.com #infosec #javascript #react
  • 0 Votes
    1 Posts
    8 Views
    Google releases Chrome 143 with important security patches 📌 Link to the article: https://www.redhotcyber.com/post/google-rilascia-chrome-143-con-importanti-patch-di-sicurezza/ #redhotcyber #news #googlechrome #chromesicurezza #vulnerabilita #javascript #cybersecurity #hacking #malware
  • Working with #JavaScript again.

    Uncategorized javascript
    0 Votes
    1 Posts
    4 Views
    Working with #JavaScript again.
  • 0 Votes
    4 Posts
    7 Views
    @oblomov When using `checked` or `hidden` I prefer not to add any value. ```<input type="checkbox" checked />``` Here's an interesting take with a link to the HTML spec: https://mastodon.social/@shinspiegel/115530101220632082
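    For boolean attributes like these, presence alone means true — the value, if any, is ignored. A small sketch of toggling them from script without ever writing a value:

    const box = document.querySelector('input[type="checkbox"]');
    if (box) {
      // The second argument forces the attribute on or off; no value is written.
      box.toggleAttribute("checked", true);
      console.log(box.hasAttribute("checked")); // true
      console.log(box.getAttribute("checked")); // "" — an empty string, which is fine
    }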
  • 0 Votes
    1 Posts
    8 Views
    Things to look at on company time: old operating systems emulated in #JavaScript https://www.pcjs.org/
