Building CLI apps with TypeScript in 2026

    We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't.

    In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion.

    The naïve approach: parsing process.argv

    Let's start with the most basic approach. Say we want a greeting program that takes a name and optionally repeats the greeting:

    // greet.ts
    const args = process.argv.slice(2);
    
    let name: string | undefined;
    let count = 1;
    
    for (let i = 0; i < args.length; i++) {
      if (args[i] === "--name" || args[i] === "-n") {
        name = args[++i];
      } else if (args[i] === "--count" || args[i] === "-c") {
        count = parseInt(args[++i], 10);
      }
    }
    
    if (!name) {
      console.error("Error: --name is required");
      process.exit(1);
    }
    
    for (let i = 0; i < count; i++) {
      console.log(`Hello, ${name}!`);
    }
    

    Compile it and run node greet.js --name Alice --count 3, and you'll get three greetings.

    But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option.

    The traditional libraries

    You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems:

    // With Commander.js
    import { program } from "commander";
    
    program
      .requiredOption("-n, --name <name>", "Name to greet")
      .option("-c, --count <number>", "Number of times to greet", "1")
      .parse();
    
    const opts = program.opts();
    

    These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in.

    The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints.
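    For instance, extending the Commander example to that server/client scenario, the constraint has to be enforced by hand. Here's a sketch of what that tends to look like; the option declarations and checks are illustrative, not part of the earlier example:

    import { program } from "commander";

    program
      .option("--port <port>", "Server port")
      .option("--host <host>", "Server host")
      .option("--url <url>", "Client URL")
      .parse();

    // Every option is potentially present, and values arrive as strings
    // unless you add custom coercion, so the mutual-exclusion rule lives
    // in hand-written runtime checks the compiler knows nothing about.
    const opts = program.opts() as { port?: string; host?: string; url?: string };

    if (opts.url !== undefined && (opts.port !== undefined || opts.host !== undefined)) {
      console.error("Error: --url cannot be combined with --port/--host");
      process.exit(1);
    }
    if (opts.url === undefined && opts.port === undefined) {
      console.error("Error: pass either --url (client mode) or --port (server mode)");
      process.exit(1);
    }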

    Enter Optique

    Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have.

    Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context.

    Let's rebuild our greeting program:

    import { object } from "@optique/core/constructs";
    import { option } from "@optique/core/primitives";
    import { integer, string } from "@optique/core/valueparser";
    import { withDefault } from "@optique/core/modifiers";
    import { run } from "@optique/run";
    
    const parser = object({
      name: option("-n", "--name", string()),
      count: withDefault(option("-c", "--count", integer({ min: 1 })), 1),
    });
    
    const config = run(parser);
    // config is typed as { name: string; count: number }
    
    for (let i = 0; i < config.count; i++) {
      console.log(`Hello, ${config.name}!`);
    }
    

    Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes.

    Install it with your package manager of choice:

    npm add @optique/core @optique/run
    # or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run
    

    Building up: a file converter

    Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file.

    import { object } from "@optique/core/constructs";
    import { withDefault } from "@optique/core/modifiers";
    import { argument, option } from "@optique/core/primitives";
    import { choice, string } from "@optique/core/valueparser";
    import { run } from "@optique/run";
    
    const parser = object({
      input: argument(string({ metavar: "INPUT" })),
      output: option("-o", "--output", string({ metavar: "FILE" })),
      format: withDefault(
        option("-f", "--format", choice(["json", "yaml", "toml"])),
        "json"
      ),
      pretty: option("-p", "--pretty"),
      verbose: option("-v", "--verbose"),
    });
    
    const config = run(parser, {
      help: "both",
      version: { mode: "both", value: "1.0.0" },
    });
    
    // config.input: string
    // config.output: string
    // config.format: "json" | "yaml" | "toml"
    // config.pretty: boolean
    // config.verbose: boolean
    

    The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time.

    The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values).

    Mutually exclusive options

    Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both:

    import { object, or } from "@optique/core/constructs";
    import { withDefault } from "@optique/core/modifiers";
    import { argument, constant, option } from "@optique/core/primitives";
    import { integer, string, url } from "@optique/core/valueparser";
    import { run } from "@optique/run";
    
    const parser = or(
      // Server mode
      object({
        mode: constant("server"),
        port: option("-p", "--port", integer({ min: 1, max: 65535 })),
        host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
      }),
      // Client mode
      object({
        mode: constant("client"),
        url: argument(url()),
      }),
    );
    
    const config = run(parser);
    

    The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator.

    TypeScript infers a discriminated union:

    type Config =
      | { mode: "server"; port: number; host: string }
      | { mode: "client"; url: URL };
    

    Now you can write type-safe code that handles each mode:

    if (config.mode === "server") {
      console.log(`Starting server on ${config.host}:${config.port}`);
    } else {
      console.log(`Connecting to ${config.url.hostname}`);
    }
    

    Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist.

    This is the key difference from configuration-based libraries. With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI.

    Subcommands

    For larger tools, you'll want subcommands. Optique handles this with the command() parser:

    import { object, or } from "@optique/core/constructs";
    import { optional } from "@optique/core/modifiers";
    import { argument, command, constant, option } from "@optique/core/primitives";
    import { string } from "@optique/core/valueparser";
    import { run } from "@optique/run";
    
    const parser = or(
      command("add", object({
        action: constant("add"),
        key: argument(string({ metavar: "KEY" })),
        value: argument(string({ metavar: "VALUE" })),
      })),
      command("remove", object({
        action: constant("remove"),
        key: argument(string({ metavar: "KEY" })),
      })),
      command("list", object({
        action: constant("list"),
        pattern: optional(option("-p", "--pattern", string())),
      })),
    );
    
    const result = run(parser, { help: "both" });
    
    switch (result.action) {
      case "add":
        console.log(`Adding ${result.key}=${result.value}`);
        break;
      case "remove":
        console.log(`Removing ${result.key}`);
        break;
      case "list":
        console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`);
        break;
    }
    

    Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands.

    The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them.
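    As a quick sketch of that composability, a subcommand's body can itself be an or() of mutually exclusive modes. This assumes command() accepts any parser as its body (its combinator design suggests so), and the connect/status commands are made up for illustration:

    import { object, or } from "@optique/core/constructs";
    import { argument, command, constant, option } from "@optique/core/primitives";
    import { integer, url } from "@optique/core/valueparser";

    // Reuse the server/client alternatives from earlier as the body of a
    // hypothetical "connect" subcommand, next to a flat "status" command.
    const serverOrClient = or(
      object({
        mode: constant("server"),
        port: option("-p", "--port", integer({ min: 1, max: 65535 })),
      }),
      object({
        mode: constant("client"),
        url: argument(url()),
      }),
    );

    const cli = or(
      command("connect", serverOrClient),
      command("status", object({ mode: constant("status") })),
    );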

    Shell completion

    Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run():

    const config = run(parser, {
      help: "both",
      version: { mode: "both", value: "1.0.0" },
      completion: "both",
    });
    

    Users can then generate completion scripts:

    $ myapp --completion bash >> ~/.bashrc
    $ myapp --completion zsh >> ~/.zshrc
    $ myapp --completion fish > ~/.config/fish/completions/myapp.fish
    

    The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add.

    Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free.

    Integrating with validation libraries

    Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers:

    import { z } from "zod";
    import { zod } from "@optique/zod";
    import { option } from "@optique/core/primitives";
    
    const email = option("--email", zod(z.string().email()));
    const port = option("--port", zod(z.coerce.number().int().min(1).max(65535)));
    

    Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to.

    Prefer Valibot? The @optique/valibot package works the same way:

    import * as v from "valibot";
    import { valibot } from "@optique/valibot";
    import { option } from "@optique/core/primitives";
    
    const email = option("--email", valibot(v.pipe(v.string(), v.email())));
    

    Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable.

    Tips

    A few things I've learned building CLIs with Optique:

    Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers.

    Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters.

    Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present.
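    To illustrate, here's a minimal sketch contrasting the two, using the host option from the server example:

    import { object } from "@optique/core/constructs";
    import { optional, withDefault } from "@optique/core/modifiers";
    import { option } from "@optique/core/primitives";
    import { string } from "@optique/core/valueparser";

    // With optional(): host is string | undefined, so every use site
    // needs a fallback like `config.host ?? "0.0.0.0"`.
    const withOptionalHost = object({
      host: optional(option("--host", string())),
    });

    // With withDefault(): host is plainly string, and the fallback
    // lives in exactly one place.
    const withDefaultHost = object({
      host: withDefault(option("--host", string()), "0.0.0.0"),
    });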

    Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic:

    import { parse } from "@optique/core/parser";
    import assert from "node:assert";
    
    const result = parse(parser, ["--name", "Alice", "--count", "3"]);
    if (result.success) {
      assert.equal(result.value.name, "Alice");
      assert.equal(result.value.count, 3);
    }
    

    This is especially valuable for complex parsers with many edge cases.

    Going further

    We've covered the fundamentals, but Optique has more to offer:

    • Async value parsers for validating against external sources, like checking if a Git branch exists or if a URL is reachable
    • Path validation with path() for checking file existence, directory structure, and file extensions
    • Custom value parsers for domain-specific types (though Zod/Valibot integration is usually easier)
    • Reusable option groups with merge() for sharing common options across subcommands (see the sketch after this list)
    • The @optique/temporal package for parsing dates and times using the Temporal API
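    The option-group idea might look roughly like this. It's a sketch assuming merge() flattens several object() parsers into a single result; the commonOptions group and the subcommands are illustrative:

    import { merge, object, or } from "@optique/core/constructs";
    import { optional } from "@optique/core/modifiers";
    import { argument, command, constant, option } from "@optique/core/primitives";
    import { string } from "@optique/core/valueparser";

    // A hypothetical option group shared by every subcommand.
    const commonOptions = object({
      verbose: option("-v", "--verbose"),
      config: optional(option("-c", "--config", string({ metavar: "FILE" }))),
    });

    const parser = or(
      command("add", merge(
        object({
          action: constant("add"),
          key: argument(string({ metavar: "KEY" })),
        }),
        commonOptions,
      )),
      command("list", merge(
        object({ action: constant("list") }),
        commonOptions,
      )),
    );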

    Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios.

    That's it

    Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production.

    The source is on GitHub, and packages are available on both npm and JSR.


    Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.
