
Piero Bosio Personal Social Website

A social forum federated with the rest of the world. Instances don't matter; people do.

Today marks 7 months of unemployment and I am very much down to the wire.


The latest eight messages received from the Federation
Suggested posts
  • 0 Votes
    3 Posts
    12 Views
    @grandboss She is using djinni. LinkedIn is trashy trash, but it is a necessity to improve your chances. Thanks for pointing to indeed.com.
  • 0 Votes
    1 Posts
    7 Views
I recently added sync/async mode support to Optique, a type-safe CLI parser for TypeScript. It turned out to be one of the trickier features I've implemented—the object() combinator alone needed to compute a combined mode from all its child parsers, and TypeScript's inference kept hitting edge cases.

What is Optique?

Optique is a type-safe, combinatorial CLI parser for TypeScript, inspired by Haskell's optparse-applicative. Instead of decorators or builder patterns, you compose small parsers into larger ones using combinators, and TypeScript infers the result types. Here's a quick taste:

```typescript
import { object } from "@optique/core/constructs";
import { argument, option } from "@optique/core/primitives";
import { string, integer } from "@optique/core/valueparser";
import { run } from "@optique/run";

const cli = object({
  name: argument(string()),
  count: option("-n", "--count", integer()),
});

// TypeScript infers: { name: string; count: number | undefined }
const result = run(cli); // sync by default
```

The type inference works through arbitrarily deep compositions—in most cases, you don't need explicit type annotations.

How it started

Lucas Garron (@lgarron@mastodon.social) opened an issue requesting async support for shell completions. He wanted to provide Tab-completion suggestions by running shell commands like git for-each-ref to list branches and tags.

```typescript
// Lucas's example: fetching Git branches and tags in parallel
const [branches, tags] = await Promise.all([
  $`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
  $`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
```

At first, I didn't like the idea. Optique's entire API was synchronous, which made it simpler to reason about and avoided the “async infection” problem, where one async function forces everything upstream to become async. I argued that shell completion should be near-instantaneous, and that if you need async data, you should cache it at startup. But Lucas pushed back:
> The filesystem is a database, and many useful completions inherently require async work—Git refs change constantly, and pre-caching everything at startup doesn't scale for large repos.

Fair point.

What I needed to solve

So, how do you support both sync and async execution modes in a composable parser library while maintaining type safety? The key requirements were:

- parse() returns T or Promise<T>
- complete() returns T or Promise<T>
- suggest() returns Iterable<T> or AsyncIterable<T>
- When combining parsers, if any parser is async, the combined result must be async
- Existing sync code should continue to work unchanged

The fourth requirement is the tricky one. Consider this:

```typescript
const syncParser = flag("--verbose");
const asyncParser = option("--branch", asyncValueParser);

// What's the type of this?
const combined = object({ verbose: syncParser, branch: asyncParser });
```

The combined parser should be async because one of its fields is async. This means we need type-level logic to compute the combined mode.

Five design options

I explored five different approaches, each with its own trade-offs.

Option A: conditional types with a mode parameter

Add a mode type parameter to Parser and use conditional types:

```typescript
type Mode = "sync" | "async";
type ModeValue<M extends Mode, T> = M extends "async" ? Promise<T> : T;

interface Parser<M extends Mode, TValue, TState> {
  parse(context: ParserContext<TState>): ModeValue<M, ParserResult<TState>>;
  // ...
}
```

The challenge is computing combined modes:

```typescript
type CombineModes<T extends Record<string, Parser<any, any, any>>> =
  T[keyof T] extends Parser<infer M, any, any>
    ? M extends "async" ? "async" : "sync"
    : never;
```

Option B: mode parameter with a default value

A variant of Option A, but place the mode parameter first with a default of "sync":

```typescript
interface Parser<M extends Mode = "sync", TValue, TState> {
  readonly $mode: M;
  // ...
}
```

The default value maintains backward compatibility—existing user code keeps working without changes.
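To see Option B work outside of Optique's codebase, here is a self-contained sketch with a toy Parser shape of my own (not Optique's real interface; all type parameters after the defaulted mode also need defaults for the declaration to be valid TypeScript):

```typescript
type Mode = "sync" | "async";
type ModeValue<M extends Mode, T> = M extends "async" ? Promise<T> : T;

// Toy parser shape; later parameters also get defaults so that the
// defaulted mode parameter can come first.
interface Parser<M extends Mode = "sync", T = unknown> {
  readonly $mode: M;
  parse(input: string): ModeValue<M, T>;
}

// Sync parser: parse() returns T directly.
const length: Parser<"sync", number> = {
  $mode: "sync",
  parse: (input) => input.length,
};

// Async parser: the "async" mode makes parse() return Promise<T>.
const lengthAsync: Parser<"async", number> = {
  $mode: "async",
  parse: (input) => Promise.resolve(input.length),
};

console.log(length.parse("hello")); // 5
```

Swapping the annotation on either constant (say, marking lengthAsync as Parser<"sync", number>) is a compile error, which is exactly the enforcement the mode parameter buys.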
Option C: separate interfaces

Define completely separate Parser and AsyncParser interfaces with explicit conversion:

```typescript
interface Parser<TValue, TState> { /* sync methods */ }
interface AsyncParser<TValue, TState> { /* async methods */ }

function toAsync<T, S>(parser: Parser<T, S>): AsyncParser<T, S>;
```

Simpler to understand, but requires code duplication and explicit conversions.

Option D: union return types for suggest() only

The minimal approach. Only allow suggest() to be async:

```typescript
interface Parser<TValue, TState> {
  // always sync
  parse(context: ParserContext<TState>): ParserResult<TState>;
  // can be either
  suggest(
    context: ParserContext<TState>,
    prefix: string,
  ): Iterable<Suggestion> | AsyncIterable<Suggestion>;
}
```

This addresses the original use case but doesn't help if async parse() is ever needed.

Option E: fp-ts-style HKT simulation

Use the technique from fp-ts to simulate higher-kinded types:

```typescript
interface URItoKind<A> {
  Identity: A;
  Promise: Promise<A>;
}
type Kind<F extends keyof URItoKind<any>, A> = URItoKind<A>[F];

interface Parser<F extends keyof URItoKind<any>, TValue, TState> {
  parse(context: ParserContext<TState>): Kind<F, ParserResult<TState>>;
}
```

The most flexible approach, but with a steep learning curve.

Testing the idea

Rather than commit to an approach based on theoretical analysis, I created a prototype to test how well TypeScript handles the type inference in practice. I published my findings in the GitHub issue:

> Both approaches correctly handle the “any async → all async” rule at the type level. (…) Complex conditional types like ModeValue<CombineParserModes<T>, ParserResult<TState>> sometimes require explicit type casting in the implementation. This only affects library internals. The user-facing API remains clean.

The prototype validated that Option B (explicit mode parameter with default) would work.
I chose it for these reasons:

- Backward compatible: the default "sync" keeps existing code working
- Explicit: the mode is visible in both types and at runtime (via a $mode property)
- Debuggable: easy to inspect the current mode at runtime
- Better IDE support: type information is more predictable

How CombineModes works

The CombineModes type computes whether a combined parser should be sync or async:

```typescript
type CombineModes<T extends readonly Mode[]> =
  "async" extends T[number] ? "async" : "sync";
```

This type checks whether "async" is present anywhere in the tuple of modes. If so, the result is "async"; otherwise, it's "sync". For combinators like object(), I needed to extract modes from parser objects and combine them:

```typescript
// Extract the mode from a single parser
type ParserMode<T> = T extends Parser<infer M, unknown, unknown> ? M : never;

// Combine modes from all values in a record of parsers
type CombineObjectModes<T extends Record<string, Parser<Mode, unknown, unknown>>> =
  CombineModes<{ [K in keyof T]: ParserMode<T[K]> }[keyof T][]>;
```

Runtime implementation

The type system handles compile-time safety, but the implementation also needs runtime logic. Each parser has a $mode property that indicates its execution mode:

```typescript
const syncParser = option("-n", "--name", string());
console.log(syncParser.$mode); // "sync"

const asyncParser = option("-b", "--branch", asyncValueParser);
console.log(asyncParser.$mode); // "async"
```

Combinators compute their mode at construction time:

```typescript
function object<T extends Record<string, Parser<Mode, unknown, unknown>>>(
  parsers: T
): Parser<CombineObjectModes<T>, ObjectValue<T>, ObjectState<T>> {
  const parserKeys = Reflect.ownKeys(parsers);
  const combinedMode: Mode = parserKeys.some(
    (k) => parsers[k as keyof T].$mode === "async"
  )
    ? "async"
    : "sync";
  // ... implementation
}
```

Refining the API

Lucas suggested an important refinement during our discussion.
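The “any async wins” rule can be sanity-checked in isolation. In this standalone snippet (reusing only the CombineModes definition, nothing else from Optique), the two annotated assignments compile only when the conditional type resolves to the expected literal, and a value-level mirror shows the same rule at runtime:

```typescript
type Mode = "sync" | "async";
type CombineModes<T extends readonly Mode[]> =
  "async" extends T[number] ? "async" : "sync";

// Type-level checks: these assignments only compile when the
// conditional type resolves to the annotated literal.
const allSync: CombineModes<["sync", "sync"]> = "sync";
const oneAsync: CombineModes<["sync", "async", "sync"]> = "async";

// Value-level mirror of the same "any async wins" rule.
function combineModes(modes: readonly Mode[]): Mode {
  return modes.includes("async") ? "async" : "sync";
}

console.log(combineModes(["sync", "sync"]));          // "sync"
console.log(combineModes(["sync", "async", "sync"])); // "async"
```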
Instead of having run() automatically choose between sync and async based on the parser mode, he proposed separate functions:

> Perhaps run(…) could be automatic, and runSync(…) and runAsync(…) could enforce that the inferred type matches what is expected.

So we ended up with:

- run(): automatic based on parser mode
- runSync(): enforces sync mode at compile time
- runAsync(): enforces async mode at compile time

```typescript
// Automatic: returns T for sync parsers, Promise<T> for async
const result1 = run(syncParser);  // string
const result2 = run(asyncParser); // Promise<string>

// Explicit: compile-time enforcement
const result3 = runSync(syncParser);   // string
const result4 = runAsync(asyncParser); // Promise<string>

// Compile error: can't use runSync with an async parser
const result5 = runSync(asyncParser); // Type error!
```

I applied the same pattern to parse()/parseSync()/parseAsync() and suggest()/suggestSync()/suggestAsync() in the facade functions.

Creating async value parsers

With the new API, creating an async value parser for Git branches looks like this:

```typescript
import type { Suggestion } from "@optique/core/parser";
import type { ValueParser, ValueParserResult } from "@optique/core/valueparser";

function gitRef(): ValueParser<"async", string> {
  return {
    $mode: "async",
    metavar: "REF",
    parse(input: string): Promise<ValueParserResult<string>> {
      return Promise.resolve({ success: true, value: input });
    },
    format(value: string): string {
      return value;
    },
    async *suggest(prefix: string): AsyncIterable<Suggestion> {
      const { $ } = await import("bun");
      const [branches, tags] = await Promise.all([
        $`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
        $`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
      ]);
      for (const ref of [...branches.split("\n"), ...tags.split("\n")]) {
        const trimmed = ref.trim();
        if (trimmed && trimmed.startsWith(prefix)) {
          yield { kind: "literal", text: trimmed };
        }
      }
    },
  };
}
```

Notice that parse() returns Promise.resolve() even though it's synchronous. This is because the ValueParser<"async", T> type requires all methods to use async signatures. Lucas pointed out that this is a minor ergonomic issue: if only suggest() needs to be async, you still have to wrap parse() in a Promise. I considered per-method mode granularity (e.g., ValueParser<ParseMode, SuggestMode, T>), but the implementation complexity would multiply substantially. For now, the workaround is simple enough:

```typescript
// Option 1: Use Promise.resolve()
parse(input) {
  return Promise.resolve({ success: true, value: input });
}

// Option 2: Mark as async and suppress the linter
// biome-ignore lint/suspicious/useAwait: sync implementation in async ValueParser
async parse(input) {
  return { success: true, value: input };
}
```

What it cost

Supporting dual modes added significant complexity to Optique's internals. Every combinator needed updates:

- Type signatures grew more complex with mode parameters
- Mode propagation logic had to be added to every combinator
- Dual implementations were needed for sync and async code paths
- Type casts were sometimes necessary in the implementation to satisfy TypeScript

For example, the object() combinator went from around 100 lines to around 250 lines. The internal implementation uses conditional logic based on the combined mode:

```typescript
if (combinedMode === "async") {
  return {
    $mode: "async" as M,
    // ... async implementation with Promise chains
    async parse(context) {
      // ... await each field's parse result
    },
  };
} else {
  return {
    $mode: "sync" as M,
    // ... sync implementation
    parse(context) {
      // ... directly call each field's parse
    },
  };
}
```

This duplication is the cost of supporting both modes without runtime overhead for sync-only use cases.

Lessons learned

Listen to users, but validate with prototypes

My initial instinct was to resist async support. Lucas's persistence and concrete examples changed my mind, but I validated the approach with a prototype before committing.
The prototype revealed practical issues (like TypeScript inference limits) that pure design analysis would have missed.

Backward compatibility is worth the complexity

Making "sync" the default mode meant existing code continued to work unchanged. This was a deliberate choice: breaking changes should require explicit user action, not happen silently.

Unified mode vs. per-method granularity

I chose unified mode (all methods share the same sync/async mode) over per-method granularity. This means users occasionally write Promise.resolve() for methods that don't actually need async, but the alternative was multiplicative complexity in the type system.

Designing in public

The entire design process happened in a public GitHub issue. Lucas, Giuseppe, and others contributed ideas that shaped the final API. The runSync()/runAsync() distinction came directly from Lucas's feedback.

Conclusion

This was one of the more challenging features I've implemented in Optique. TypeScript's type system is powerful enough to encode the “any async means all async” rule at compile time, but getting there required careful design work and prototyping.

What made it work: conditional types like ModeValue<M, T> can bridge the gap between the sync and async worlds. You pay for it with implementation complexity, but the user-facing API stays clean and type-safe.

Optique 0.9.0 with async support is currently in pre-release testing. If you'd like to try it, check out PR #70 or install the pre-release:

```shell
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
```

Feedback is welcome!
  • 0 Votes
    1 Posts
    7 Views
Fedify 1.10.0: Observability foundations for the future debug dashboard

Fedify is a #TypeScript framework for building #ActivityPub servers that participate in the #fediverse. It reduces the complexity and boilerplate typically required for an ActivityPub implementation while providing comprehensive federation capabilities.

We're excited to announce #Fedify 1.10.0, a focused release that lays critical groundwork for future debugging and observability features. Released on December 24, 2025, this version introduces infrastructure improvements that will enable the upcoming debug dashboard while maintaining full backward compatibility with existing Fedify applications.

This release represents a transitional step toward Fedify 2.0.0, introducing optional capabilities that will become standard in the next major version. The changes focus on enabling richer observability through OpenTelemetry enhancements and adding prefix scanning capabilities to the key–value store interface.

Enhanced OpenTelemetry instrumentation

Fedify 1.10.0 significantly expands OpenTelemetry instrumentation with span events that capture detailed ActivityPub data. These enhancements enable richer observability and debugging capabilities without relying solely on span attributes, which are limited to primitive values.
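To illustrate the distinction the release leans on, here is a toy model (my own sketch, not the OpenTelemetry API and not Fedify code): attributes hold only primitive values, while events can carry a named, arbitrarily structured payload such as a full activity JSON.

```typescript
// Toy span model: attributes hold primitives, events hold structured payloads.
type Primitive = string | number | boolean;

class ToySpan {
  readonly attributes: Record<string, Primitive> = {};
  readonly events: { name: string; payload: unknown }[] = [];

  setAttribute(key: string, value: Primitive): void {
    this.attributes[key] = value;
  }

  addEvent(name: string, payload: unknown): void {
    this.events.push({ name, payload });
  }
}

const span = new ToySpan();

// An attribute can record the activity type, but not the whole activity.
span.setAttribute("activitypub.activity.type", "Create");

// An event can carry the complete activity JSON plus verification status,
// which is what a debug dashboard needs to reconstruct the full context.
span.addEvent("activitypub.activity.received", {
  activity: { type: "Create", object: { type: "Note", content: "Hi" } },
  httpSignaturesVerified: true,
});

console.log(span.events[0].name); // "activitypub.activity.received"
```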
The new span events provide complete activity payloads and verification status, making it possible to build comprehensive debugging tools that show the full context of federation operations:

- activitypub.activity.received event on the activitypub.inbox span — records the full activity JSON, verification status (activity verified, HTTP Signatures verified, Linked Data Signatures verified), and actor information
- activitypub.activity.sent event on the activitypub.send_activity span — records the full activity JSON and target inbox URL
- activitypub.object.fetched event on the activitypub.lookup_object span — records the fetched object's type and complete JSON-LD representation

Additionally, Fedify now instruments previously uncovered operations:

- activitypub.fetch_document span for document loader operations, tracking URL fetching, HTTP redirects, and final document URLs
- activitypub.verify_key_ownership span for cryptographic key ownership verification, recording actor ID, key ID, verification result, and the verification method used

These instrumentation improvements emerged from work on issue #234 (real-time ActivityPub debug dashboard). Rather than introducing a custom observer interface as originally proposed in #323, we leveraged Fedify's existing OpenTelemetry infrastructure to capture rich federation data through span events. This approach provides a standards-based foundation that's composable with existing observability tools like Jaeger, Zipkin, and Grafana Tempo.

Distributed trace storage with FedifySpanExporter

Building on the enhanced instrumentation, Fedify 1.10.0 introduces FedifySpanExporter, a new OpenTelemetry SpanExporter that persists ActivityPub activity traces to a KvStore. This enables distributed tracing support across multiple nodes in a Fedify deployment, which is essential for building debug dashboards that can show complete request flows across web servers and background workers.
The new @fedify/fedify/otel module provides the following types and interfaces:

```typescript
import { MemoryKvStore } from "@fedify/fedify";
import { FedifySpanExporter } from "@fedify/fedify/otel";
import {
  BasicTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-base";

const kv = new MemoryKvStore();
const exporter = new FedifySpanExporter(kv, {
  ttl: Temporal.Duration.from({ hours: 1 }),
});

const provider = new BasicTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
```

The stored traces can be queried for display in debugging interfaces:

```typescript
// Get all activities for a specific trace
const activities = await exporter.getActivitiesByTraceId(traceId);

// Get recent traces with summary information
const recentTraces = await exporter.getRecentTraces({ limit: 100 });
```

The exporter supports two storage strategies, depending on the KvStore's capabilities. When the list() method is available (preferred), it stores individual records with keys like [prefix, traceId, spanId]. When only cas() is available, it uses compare-and-swap operations to append records to arrays stored per trace.

This infrastructure provides the foundation for implementing a comprehensive debug dashboard as a custom SpanExporter, as outlined in the updated implementation plan for issue #234.

Optional list() method for the KvStore interface

Fedify 1.10.0 adds an optional list() method to the KvStore interface for enumerating entries by key prefix. This method enables efficient prefix scanning, which is useful for implementing features like distributed trace storage, cache invalidation by prefix, and listing related entries.

```typescript
interface KvStore {
  // ... existing methods
  list?(prefix?: KvKey): AsyncIterable<KvStoreListEntry>;
}
```

When the prefix parameter is omitted or empty, list() returns all entries in the store. This is useful for debugging and administrative purposes.
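A minimal sketch of how such prefix scanning can behave (a hypothetical in-memory store of my own, not Fedify's MemoryKvStore): keys are arrays, and list() yields every entry whose key starts with the given prefix.

```typescript
type KvKey = readonly string[];
interface KvEntry { key: KvKey; value: unknown }

// Hypothetical in-memory store illustrating prefix scanning over array keys.
class TinyKvStore {
  private readonly entries = new Map<string, KvEntry>();

  set(key: KvKey, value: unknown): void {
    this.entries.set(JSON.stringify(key), { key, value });
  }

  // Yields entries whose key begins with every element of `prefix`;
  // an empty prefix matches everything.
  async *list(prefix: KvKey = []): AsyncIterableIterator<KvEntry> {
    for (const entry of this.entries.values()) {
      if (prefix.every((part, i) => entry.key[i] === part)) {
        yield entry;
      }
    }
  }
}

// Usage: store spans under [namespace, traceId, spanId], then scan one trace.
const kv = new TinyKvStore();
kv.set(["traces", "t1", "s1"], { name: "activitypub.inbox" });
kv.set(["traces", "t1", "s2"], { name: "activitypub.send_activity" });
kv.set(["traces", "t2", "s1"], { name: "activitypub.lookup_object" });

(async () => {
  const spans: KvEntry[] = [];
  for await (const entry of kv.list(["traces", "t1"])) spans.push(entry);
  console.log(spans.length); // 2
})();
```

This is the same key layout the [prefix, traceId, spanId] strategy described above relies on: scanning a trace is one prefix query rather than a full enumeration.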
All official KvStore implementations have been updated to support this method:

- MemoryKvStore — filters in-memory keys by prefix
- SqliteKvStore — uses a LIKE query with a JSON key pattern
- PostgresKvStore — uses array slice comparison
- RedisKvStore — uses SCAN with pattern matching and key deserialization
- DenoKvStore — delegates to Deno KV's built-in list() API
- WorkersKvStore — uses Cloudflare Workers KV list() with a JSON key prefix pattern

While list() is currently optional, to give existing custom KvStore implementations time to add support, it will become a required method in Fedify 2.0.0 (tracked in issue #499). This migration path allows implementers to adopt the new capability gradually throughout the 1.x release cycle.

The addition of list() support was implemented in pull request #500, which also set up proper testing infrastructure for WorkersKvStore using Vitest with @cloudflare/vitest-pool-workers.

NestJS 11 and Express 5 support

Thanks to a contribution from Cho Hasang (@crohasang@hackers.pub), the @fedify/nestjs package now supports NestJS 11 environments that use Express 5. The peer dependency range for Express has been widened to ^4.0.0 || ^5.0.0, eliminating peer dependency conflicts in modern NestJS projects while maintaining backward compatibility with Express 4. This change, implemented in pull request #493, keeps the workspace catalog pinned to Express 4 for internal development and test stability while allowing Express 5 in consuming applications.

What's next

Fedify 1.10.0 serves as a stepping stone toward the upcoming 2.0.0 release. The optional list() method introduced in this version will become required in 2.0.0, simplifying the interface contract and allowing Fedify internals to rely on prefix scanning being universally available. The enhanced #OpenTelemetry instrumentation and FedifySpanExporter provide the foundation for implementing the debug dashboard proposed in issue #234.
The next steps include building the web dashboard UI with real-time activity lists, filtering, and JSON inspection capabilities—all as a separate package that leverages the standards-based observability infrastructure introduced in this release. Depending on the development timeline and feature priorities, there may be additional 1.x releases before the 2.0.0 migration.

For developers building custom KvStore implementations, now is the time to add list() support to prepare for the eventual 2.0.0 upgrade. The implementation patterns used in the official backends provide clear guidance for various storage strategies.

Acknowledgments

Special thanks to Cho Hasang (@crohasang@hackers.pub) for the NestJS 11 compatibility improvements, and to all community members who provided feedback and testing for the new observability features.

For the complete list of changes, bug fixes, and improvements, please refer to the CHANGES.md file in the repository.

#fedidev #release
  • Drunk CSS

    Uncategorized css drunk html webdev
    0 Votes
    1 Posts
    4 Views
Drunk CSS
https://shkspr.mobi/blog/2025/09/drunk-css/

A decade ago, I was writing about how you should test your user interface on drunk people. It was a semi-serious idea. Some of your users will be drunk when using your app or website. If it is easy for them to use, then it should be easy for sober people to use.

Of course, necking a few shots every time you update your website isn't great for your health - so is there another way?

Click the "🥴 Drunk" button at the top of the page and see what happens!

These are a relatively simple set of CSS rules which you can apply to any site in order to simulate inebriation. (I may have changed these since writing the post. Check the source for the latest version.)

First, monkey around with the fonts. This sets all the lower-case vowels to be rendered in a different font - as discussed in "targetting specific characters with CSS rules":

```css
/* Drunk */
@font-face {
  font-family: "Drunk";
  src: url("/blog/wp-content/themes/edent-wordpress-theme/assets/fonts/CommitMonoV143-Edent.woff2") format("woff2");
  /* Lower-Case Vowels */
  unicode-range: U+61, U+65, U+69, U+6F, U+75;
  size-adjust: 105%;
}
```

The rest of the characters will be rendered in the system's default cursive font. Characters will also be slanted. The first character of every paragraph will be shrunk:

```css
:root:has(input#drunk:checked) * {
  font-family: "Drunk", cursive;
  font-style: oblique -12deg;
  text-align: end;
}

:root:has(input#drunk:checked) p::first-letter {
  font-size: .5em;
}
```

Next, use the child selectors to rotate and skew various elements.
While we wait for CSS randomness to come to all browsers, this is a simple way to select various elements:

```css
:root:has(input#drunk:checked) *:nth-child(3n) {
  transform: rotate(2deg);
}

:root:has(input#drunk:checked) *:nth-child(5n) {
  transform: skew(5deg, 5deg);
}

:root:has(input#drunk:checked) *:nth-child(7n) {
  transform: rotate(-3deg);
}
```

Make the entire page blurred and saturate the colours:

```css
:root:has(input#drunk:checked) body {
  filter: blur(1px) saturate(2.5);
}
```

Make any hyperlink harder to click by having it gently bounce up and down:

```css
:root:has(input#drunk:checked) a {
  animation-name: bounce;
  animation-duration: 4s;
  animation-direction: alternate;
  animation-timing-function: ease-in-out;
  animation-iteration-count: infinite;
}

@keyframes bounce {
  0%   { margin-top: 0px; }
  25%  { margin-top: -10px; }
  50%  { margin-top: 0px; }
  75%  { margin-top: 10px; }
  100% { margin-top: 0px; }
}
```

Does this really simulate drunkenness? No. It is a pale simulacrum. What it is, however, is deliberately inaccessible to the majority of people.

How does it make you feel using the site in Drunk Mode? Does it frustrate you? Do your eyes hurt from the garish colour scheme? Do you keep missing the thing that you try and click on? Are the words so hard to read that it takes you extra time to do anything useful? Will you recommend this experience to your friends and family?

I've written before about cosplaying as being disabled. Strapping on a pair of Glaucoma Goggles will give you an idea of what a visual impairment is like. But it won't give you the experience of living that way for months or years.

You should test your stuff with people who have cognitive impairments or physical disabilities. Find out how usable your site is for someone lacking fine motor control or for those with learning disabilities. Pay disabled people to take part in usability studies. Integrate their feedback.

Faffing around with CSS will only get you so far.

#css #drunk #HTML #ui #ux #webdev