
Piero Bosio - Personal Social Web Site

A social forum federated with the rest of the world. Instances don't matter, people do.

@notfire google's gaslighting you

Uncategorized
1 1 0

The latest eight messages received from the Federation
Suggested posts
  • 0 Votes
    1 Posts
    0 Views
    #pastpuzzle en-147 🟩🟥🟥🟨 (-185) 🟩🟩🟥🟥 (+37) 🟩🟩🟥🟥 (+59) 🟩🟩🟥🟩 (-10) x/4 🟥 https://www.pastpuzzle.de Not today...
  • 0 Votes
    1 Posts
    0 Views
    cross-posted from: https://lemmy.dbzer0.com/post/55501944

    Hey, I've been kicking around an idea for a bot: it would look for fanfiction links shared in a server and keep track of which ones get shared the most.

    The concept:
      • Track sites like AO3, FanFiction.net, ScribbleHub, etc.
      • Count how often each link gets posted
      • Commands to see the "top" links or which domains are tracked

    It's just a rough idea, I haven't built it yet. Curious if anyone thinks this would actually be useful or has tips for implementing it without overcomplicating things. (A periodic-autosave sketch for this bot follows after this list of posts.)

    import re
    import json
    import discord
    from collections import Counter
    from discord.ext import commands

    TOKEN = "YOUR_BOT_TOKEN"

    intents = discord.Intents.default()
    intents.message_content = True
    bot = commands.Bot(command_prefix="!", intents=intents)

    # Domain list you want to track (lower-case)
    TRACK_DOMAINS = {
        "archiveofourown.org",
        "fanfiction.net",
        "forum.questionablequesting.com",
        "forums.spacebattles.com",
        "forums.sufficientvelocity.com",
        "webnovel.com",
        "hentai-foundry.com",
        "scribblehub.com",
    }

    link_pattern = re.compile(r'https?://\S+')
    link_counter = Counter()

    def domain_of_url(url: str) -> str | None:
        try:
            # Extract domain part
            from urllib.parse import urlparse
            parsed = urlparse(url)
            domain = parsed.netloc.lower()
            # remove leading "www."
            if domain.startswith("www."):
                domain = domain[4:]
            return domain
        except Exception:
            return None

    def save_links():
        with open("links.json", "w") as f:
            # convert counts to dict
            json.dump(dict(link_counter), f)

    def load_links():
        try:
            with open("links.json") as f:
                data = json.load(f)
            for link, cnt in data.items():
                link_counter[link] = cnt
        except FileNotFoundError:
            pass

    @bot.event
    async def on_ready():
        load_links()
        print(f"Bot is ready. Logged in as {bot.user}")

    @bot.event
    async def on_message(message):
        if message.author.bot:
            return
        links = link_pattern.findall(message.content)
        for link in links:
            dom = domain_of_url(link)
            if dom in TRACK_DOMAINS:
                link_counter[link] += 1
        await bot.process_commands(message)

    @bot.command(name="links")
    async def links(ctx, top: int = 10):
        if not link_counter:
            await ctx.send("No links recorded.")
            return
        sorted_links = sorted(link_counter.items(), key=lambda x: x[1], reverse=True)
        display = sorted_links[:top]
        lines = [f"{link} - {count}" for link, count in display]
        await ctx.send("**Top links:**\n" + "\n".join(lines))

    @bot.command(name="domains")
    async def domains(ctx):
        """Show which domains are tracked."""
        await ctx.send("Tracked domains: " + ", ".join(sorted(TRACK_DOMAINS)))

    @bot.command(name="dump")
    async def dump(ctx):
        """For admin: dump full counts (might be large)."""
        if not link_counter:
            await ctx.send("No data.")
            return
        lines = [f"{link} - {cnt}" for link, cnt in sorted(link_counter.items(), key=lambda x: x[1], reverse=True)]
        chunk = "\n".join(lines)
        # Discord message length limit; you may need to split
        await ctx.send(f"All counts:\n{chunk}")

    @bot.event
    async def on_disconnect():
        save_links()

    bot.run(TOKEN)
  • Georgia folks.

    Uncategorized
    1
    0 Votes
    1 Posts
    0 Views
    Georgia folks. https://www.youtube.com/watch?v=UgvE_gPi7Kc
  • 0 Votes
    7 Posts
    0 Views
    @PaoloParti @eccosilvia I'd rather not, but with all the arsenal we're already getting from the Americans, what's half a cartridge going to matter?!
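
Returning to the link-tracker bot post above: the snippet only persists counts in on_disconnect, which is not guaranteed to fire on a hard crash. Below is a minimal sketch of a periodic autosave using discord.py's discord.ext.tasks, assuming the bot, link_counter, save_links(), load_links() and domain_of_url() names from that post; the five-minute interval and the domain_totals() helper are illustrative additions, not part of the original code.

    from collections import Counter
    from discord.ext import tasks

    @tasks.loop(minutes=5)  # illustrative interval; adjust as needed
    async def autosave():
        # Write current counts to disk so a crash loses at most a few minutes of data.
        save_links()

    @autosave.before_loop
    async def before_autosave():
        # Wait until the bot is connected before the first save.
        await bot.wait_until_ready()

    @bot.event
    async def on_ready():
        load_links()
        if not autosave.is_running():
            autosave.start()
        print(f"Bot is ready. Logged in as {bot.user}")

    # Hypothetical helper (not in the original post): per-domain totals,
    # handy if the !domains command should also report counts.
    def domain_totals() -> Counter:
        totals = Counter()
        for link, cnt in link_counter.items():
            dom = domain_of_url(link)
            if dom:
                totals[dom] += cnt
        return totals

With this in place, on_disconnect becomes a best-effort final save rather than the only save point.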