Programming

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross-posting within the instance is strongly encouraged. If you feel your post or another person's post makes sense in another community, cross-post it there.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos, try to add some form of TL;DR for those who don't want to watch them

Wormhole

Follow the wormhole through a path of communities !webdev@programming.dev



founded 2 years ago
When I built workdash, its goal and vision weren't immediately clear to everyone. So I thought it could be interesting for people who don't immediately see it to read about the principles and frustrations that made workdash what it is.

In my daily life I maintain and contribute to multiple open-source projects and work repositories, which means I constantly juggle git repos, issue trackers, CI, reviews, terminals, coding agents, logs, and random operational tasks.

In the past I tried to keep up with all of this in many different ways: emails, GitHub notifications, aggregators, the classic “I’ll just ignore everything, if it’s really important someone will ping me” mantra, and even odder automation I won’t share here because it would only make your life worse, not better (no, really, don’t try to delegate your life organization to an agent).

None of those really worked for me, because most developer dashboards try to solve this by replacing your workflow with their workflow.

Your editor becomes secondary. Your terminal disappears. Git becomes a button inside somebody else’s UI. Eventually the dashboard becomes the place where everything must happen.

The question I found myself asking most frequently was "why can't I just open a shell here?"

This is more or less the story of how that question led me to create "one more tool", and how maybe someone else will find it useful too.

Hey everyone,

I recently open-sourced OpenOSINT, a Python-based CLI framework designed to automate reconnaissance and threat intelligence workflows.

The architectural problem: Traditional OSINT automation usually relies on rigid bash scripts or static Python pipelines. If a tool fails, or if a specific finding requires a sudden pivot (e.g., finding an unexpected subdomain and needing to run a specific vulnerability check on it), a static pipeline simply breaks or requires massive if/else chains.

The approach: To solve this, I built an orchestrator leveraging the native tool-use/function calling APIs from Anthropic and OpenAI.

Here is how it works under the hood:

  • Dynamic Orchestration: You provide a target (IP, domain, email) and a query. The LLM acts purely as a reasoning engine.
  • Tool Registration: Local OSINT scripts are mapped as available tools. The framework reads the Python functions, parses docstrings and type hints, and feeds them to the LLM as an array of available actions.
  • Execution Loop: The LLM decides which tools to call, in what order, and dynamically pipes the structured output of one tool as the input parameter for the next one.
  • Modularity: Adding a new capability is plug-and-play. You just drop a new Python script into the modules directory, and the agent automatically knows it exists and how to use it based on the schema.
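To make the registration step concrete, here is a minimal, self-contained sketch of how a Python function can be turned into a tool-use schema by reading its signature, type hints, and docstring. This is not OpenOSINT's actual code; the `dns_lookup` tool and its stubbed return value are hypothetical, and a real module would perform the lookup and then hand the schema to the Anthropic/OpenAI tool-calling API.

```python
import inspect
import typing

# Hypothetical OSINT tool: in a real modules directory this would live in its own file.
def dns_lookup(domain: str, record_type: str = "A") -> list:
    """Resolve DNS records for a domain."""
    return [f"{record_type} record for {domain}"]  # stubbed result for illustration

# Map Python annotations to JSON Schema types for the LLM-facing schema.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean", list: "array"}

def build_tool_schema(fn):
    """Turn a Python function into a tool-use schema by reading its
    signature, type hints, and docstring -- the registration step."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are mandatory
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "input_schema": {"type": "object", "properties": props, "required": required},
    }

schema = build_tool_schema(dns_lookup)
```

Dropping a new script into the modules directory then amounts to registering one more schema in the array sent with each LLM request.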

It's strictly CLI-native and outputs structured reports.

You can check out the code and the CLI demo here: https://github.com/OpenOSINT/OpenOSINT

I'm looking for some technical feedback on the codebase. Specifically, I'd love to hear your thoughts on how to better manage context-window limits when dealing with massive raw outputs (like huge DNS dumps or nmap scans) before feeding them back into the LLM's memory.

Any architectural critiques or suggestions are welcome!

Anyone have a recommendation for how I can scrape a website and extract unique names, such as product names?

I was thinking of using some website scraping tool, then a local LLM to find unique product names.
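For the extraction half, you can often get surprisingly far without an LLM if the site uses consistent markup. Here's a stdlib-only sketch using `html.parser`; the `product-name` class and the sample HTML are made up, and for real sites you'd typically fetch pages with `requests` and parse with BeautifulSoup, reserving the local LLM for deduplicating or filtering the candidates.

```python
from html.parser import HTMLParser

class ProductNameExtractor(HTMLParser):
    """Collects text from tags whose class attribute mentions 'product'.

    A deliberately naive heuristic: it keys on a single tag's class and
    does not handle nested markup inside a product element.
    """
    def __init__(self):
        super().__init__()
        self._capture = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "") or ""
        self._capture = "product" in classes

    def handle_data(self, data):
        if self._capture and data.strip():
            self.names.append(data.strip())

    def handle_endtag(self, tag):
        self._capture = False

# Hypothetical page fragment standing in for a fetched shop page.
html = """
<ul>
  <li class="product-name">Widget Pro 3000</li>
  <li class="product-name">Gadget Mini</li>
  <li class="nav-item">Home</li>
</ul>
"""
extractor = ProductNameExtractor()
extractor.feed(html)
unique_names = sorted(set(extractor.names))
```

The `set` handles the "unique" requirement; an LLM pass would only be needed when the names aren't cleanly delimited by markup.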

I wrote a blog post about getting into F# web application development, but I'm having issues deciding how I want the data access to look. I'm very much open to feedback!

Crossposted from https://lemmy.ml/post/46897859

It is not my project.

I was looking for a lite version of the Zed IDE without AI integrations, the collab feature, telemetry, etc., and suddenly found it ^_^

I haven't tested it extensively yet, but definitely give it a try.

If you've already tried it, please share your opinions.


cross-posted from: https://lemmy.ml/post/46884793

The Khronos OpenCL Working Group has released OpenCL 3.1, bringing widely deployed, field-proven capabilities into the core specification to expand functionality, including SPIR-V ingestion, that developers will be able to rely on across conformant implementations.

Features now mandated by OpenCL 3.1 have been deployed as extensions or optional capabilities. This is by design. The OpenCL working group evolves the specification by proving features in the field as extensions first, watching how they get used across multiple implementations, refining them based on developer feedback, and only then graduating them into the core specification.

Every conformant OpenCL 3.1 implementation will be required to consume SPIR-V kernels — a feature that has been one of the most requested by developers. OpenCL 3.1 additionally requires support for the SPIR-V query extension, which enables applications to enumerate the SPIR-V capabilities, extensions, and versions that a device supports, simplifying the adoption of new SPIR-V features as they become available.

Several features essential to HPC and AI kernels are also now mandatory in the core OpenCL 3.1 specification:

  • Subgroups, including shuffles, rotations, and an expanded set of supported data types. A fundamental building block for tuned reductions, scans, and matrix kernels.
  • Integer dot products, including saturating and accumulating variants, together with extended bit operations: Both map directly to dedicated hardware instructions on a wide range of modern silicon, and both are common building blocks for matrix multiplications and the low-precision arithmetic central to inference workloads.
  • A new query for the suggested local work-group size. This gives applications and profilers a runtime hint for the optimal work-group size for a given kernel and device, eliminating the need for manual tuning or repeated size calculations across multiple enqueues and improving performance predictability on diverse hardware.
  • A standard device UUID query, matching Vulkan’s VkPhysicalDeviceIDProperties::deviceUUID. This allows applications to correlate the same physical device across APIs, which is essential for multi-device systems and for external memory-sharing scenarios that span OpenCL and Vulkan.

The Cooley-Tukey fast Fourier transform can compute discrete Fourier transforms efficiently for signals of highly composite length, such as powers of 2. However, for signal lengths with large prime factors, an algorithm such as Bluestein's is needed to prevent the computation from degenerating to quadratic complexity.

Thanks to Bluestein's algorithm, FFTs of prime-length input sequences are "only" ~4-6x slower than nearby powers of 2. The clunkiness of Bluestein's makes fast Fourier transforms interesting: the number of steps needed to complete the algorithm is not a monotonic function of the problem size, shooting up and down depending on the factors of the input length.

Interestingly, I found that almost no signal processing knowledge is required to understand and derive Bluestein's algorithm; anyone with general computing knowledge can follow along and should be able to implement Bluestein's algorithm after reading this post.
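As a taste of how compact the result can be, here is a minimal NumPy sketch of Bluestein's algorithm (not the post author's implementation). It uses the identity nk = (n² + k² − (k − n)²)/2 to rewrite the DFT as a convolution, which is then evaluated with zero-padded power-of-two FFTs:

```python
import numpy as np

def bluestein_dft(x):
    """DFT of arbitrary length n via Bluestein's chirp z-transform."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    chirp = np.exp(-1j * np.pi * k * k / n)   # e^{-i*pi*k^2/n}
    a = x * chirp
    # Pad to a power of two large enough for the length-(2n-1) linear convolution.
    m = 1 << (2 * n - 1).bit_length()
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(chirp)
    b[m - n + 1:] = np.conj(chirp[1:])[::-1]  # wrap the negative-index chirp terms
    # Circular convolution via two forward FFTs and one inverse FFT.
    conv = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b))
    return chirp * conv[:n]
```

For a prime length such as 31, the result agrees with `np.fft.fft` to floating-point precision, at the cost of three power-of-two FFTs instead of one.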
