<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Methodox Technologies, Inc.</title>
    <description>The latest articles on DEV Community by Methodox Technologies, Inc. (@methodox).</description>
    <link>https://dev.to/methodox</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F10829%2Feb4ee867-c7ca-4fb8-a4c0-428a6c07ea48.png</url>
      <title>DEV Community: Methodox Technologies, Inc.</title>
      <link>https://dev.to/methodox</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/methodox"/>
    <language>en</language>
    <item>
      <title>DevLog 20260426: Divooka Mandelbrot Benchmark – Putting Our Scripting Language to the Test</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Mon, 27 Apr 2026 03:31:48 +0000</pubDate>
      <link>https://dev.to/methodox/devlog-20260426-divookabenchmarkmandelbrot-putting-our-scripting-language-to-the-test-54jf</link>
      <guid>https://dev.to/methodox/devlog-20260426-divookabenchmarkmandelbrot-putting-our-scripting-language-to-the-test-54jf</guid>
      <description>&lt;p&gt;Today we released the first public version of &lt;a href="https://github.com/MethodoxTech/DivookaBenchmark_Mandelbrot" rel="noopener noreferrer"&gt;&lt;strong&gt;DivookaBenchmark_Mandelbrot&lt;/strong&gt;&lt;/a&gt; — a standardized benchmark suite built specifically to measure the real-world performance of Divooka against a wide range of languages and runtimes.&lt;/p&gt;

&lt;p&gt;The benchmark computes the Mandelbrot set at &lt;em&gt;2000 × 2000&lt;/em&gt; resolution with a maximum of 1000 iterations per pixel, then verifies correctness using a 32-bit checksum (target: &lt;code&gt;689833081&lt;/code&gt;). We run each implementation five times and record average time, min/max, peak memory, and distribution size.&lt;/p&gt;
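
&lt;p&gt;For a feel of the workload, here is a minimal pure-Python sketch of the escape-time kernel. The region bounds and the checksum formula (a wrapping 32-bit sum of per-pixel iteration counts) are illustrative assumptions; the authoritative definitions live in the repository.&lt;/p&gt;

```python
# Minimal escape-time Mandelbrot with a wrapping 32-bit checksum.
# Region bounds and checksum formula are illustrative assumptions,
# not the benchmark suite's official definitions.
def mandelbrot_checksum(width, height, max_iter):
    checksum = 0
    for py in range(height):
        ci = -1.5 + 3.0 * py / height
        for px in range(width):
            cr = -2.0 + 3.0 * px / width
            zr = zi = 0.0
            n = 0
            while n < max_iter and zr * zr + zi * zi <= 4.0:
                zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
                n += 1
            checksum = (checksum + n) & 0xFFFFFFFF  # wrap to 32 bits
    return checksum

# Tiny smoke run; the real benchmark uses 2000 x 2000 at 1000 iterations.
print(mandelbrot_checksum(64, 64, 100))
```

&lt;p&gt;Tight loops over complex arithmetic like this are precisely where interpreter overhead piles up.&lt;/p&gt;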

&lt;h3&gt;
  
  
  Why This Benchmark Matters
&lt;/h3&gt;

&lt;p&gt;One of our core beliefs is that &lt;strong&gt;a modern scripting language needs decent (ideally high) performance&lt;/strong&gt;. This benchmark was designed to prove that point in a reproducible way.&lt;/p&gt;

&lt;p&gt;It exercises heavy integer and floating-point math, tight loops, branches, and memory writes — exactly the kind of workload where interpretation overhead shows up quickly. This particular benchmark doesn't test function calls directly.&lt;/p&gt;

&lt;p&gt;The goal is not to produce an optimized Mandelbrot renderer, but to compare a simple, fixed algorithm across different languages and runtimes, then use the results to understand where Divooka currently stands.&lt;/p&gt;

&lt;p&gt;We included implementations (or equivalent ports) for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;C++&lt;/li&gt;
&lt;li&gt;C# (.NET 10, including AoT/trimmed)&lt;/li&gt;
&lt;li&gt;JavaScript (multiple browsers + Node.js)&lt;/li&gt;
&lt;li&gt;Go, Java, Julia, Python 3.13, Ruby, Prolog, GNU Octave&lt;/li&gt;
&lt;li&gt;And of course, multiple execution modes of Divooka 0.75.2 (&lt;strong&gt;&lt;em&gt;compiled&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;Aviator&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;Neo Editor&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;Stewer&lt;/em&gt;&lt;/strong&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We even tested AI-generated versions from ChatGPT, Gemini, Grok, and Kimi to see how reliable and performant LLM-produced code is in practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Results (on Windows 11, i7-13700KF)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fastest overall&lt;/strong&gt;: JavaScript in Edge/Chrome/Brave (~1,296 ms, very low memory)😲&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fastest native&lt;/strong&gt;: C++ (~1,308 ms)👍&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong managed contender&lt;/strong&gt;: C# AoT (~1,355 ms)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Divooka compiled&lt;/strong&gt;: 3,639 ms — comfortably beating pure interpreters like Python (~47 s) and Ruby (~49 s) ⚡&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Divooka Aviator&lt;/strong&gt;: 6,711 ms 🥰&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Divooka Neo Editor&lt;/strong&gt;: significantly higher overhead (estimated multi-hour range for full runs) 🥹&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The gap between Divooka's compiled mode and the top-tier JITs (especially &lt;strong&gt;V8&lt;/strong&gt;) is still noticeable, but the improvement over classic interpreted scripting languages is clear. This reinforces why we’ve been investing heavily in our compilation pipeline and runtime optimizations.&lt;/p&gt;

&lt;p&gt;AI-generated implementations varied wildly — some produced correct checksums but took anywhere from 26 seconds to several minutes, while others failed entirely or fell back to calling Python under the hood. &lt;strong&gt;Deterministic engineering still wins.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What We Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;JIT is not optional for competitive scripting performance.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Tooling and editor overhead can dominate if not carefully managed (something we're actively addressing).&lt;/li&gt;
&lt;li&gt;Even simple, well-defined tasks like Mandelbrot &lt;em&gt;reveal meaningful differences in language/runtime design&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The full repository is now public:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/MethodoxTech/DivookaBenchmark_Mandelbrot" rel="noopener noreferrer"&gt;https://github.com/MethodoxTech/DivookaBenchmark_Mandelbrot&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It includes all source implementations, PowerShell automation scripts, result tables, and artifacts. Feel free to clone, run it yourself, or contribute additional language ports.&lt;/p&gt;

&lt;p&gt;This benchmark will become part of our regular performance regression suite as Divooka evolves. Future runs should show steady gains as we refine the compiler and reduce runtime overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Results
&lt;/h2&gt;

&lt;p&gt;Results from the best runs on each platform:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Language / Runtime&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;Best Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;JavaScript Web, Edge / Chrome / Brave&lt;/td&gt;
&lt;td&gt;3 KB&lt;/td&gt;
&lt;td&gt;1295.80–1302.80 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;C++&lt;/td&gt;
&lt;td&gt;15 KB&lt;/td&gt;
&lt;td&gt;1308.00 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;C# AoT&lt;/td&gt;
&lt;td&gt;911 KB&lt;/td&gt;
&lt;td&gt;1355.35 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;C# .NET 10&lt;/td&gt;
&lt;td&gt;16.5 MB&lt;/td&gt;
&lt;td&gt;1357.47 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;JavaScript Node.js&lt;/td&gt;
&lt;td&gt;2 KB + 98.2 MB&lt;/td&gt;
&lt;td&gt;1406.20 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;2 KB + 224 MB&lt;/td&gt;
&lt;td&gt;1425.01 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;Julia&lt;/td&gt;
&lt;td&gt;2 KB + 1 GB&lt;/td&gt;
&lt;td&gt;1511.20 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;JavaScript Web, Firefox&lt;/td&gt;
&lt;td&gt;3 KB&lt;/td&gt;
&lt;td&gt;1513.00 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;Java OpenJDK 17.0.11&lt;/td&gt;
&lt;td&gt;301 MB + small files&lt;/td&gt;
&lt;td&gt;1587.79 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Divooka 0.75.2, Compiled&lt;/td&gt;
&lt;td&gt;71.914 MB&lt;/td&gt;
&lt;td&gt;3639.24 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;Divooka 0.75.2, Aviator Run&lt;/td&gt;
&lt;td&gt;8 KB + 2.16 GB + 776 MB&lt;/td&gt;
&lt;td&gt;6710.84 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;Python 3.13.5&lt;/td&gt;
&lt;td&gt;2 KB + 139 MB&lt;/td&gt;
&lt;td&gt;47127.70 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;Ruby 3.2.2&lt;/td&gt;
&lt;td&gt;1 KB + 907 MB&lt;/td&gt;
&lt;td&gt;49489.82 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;Prolog SWI, optimized&lt;/td&gt;
&lt;td&gt;2 KB + 42.8 MB&lt;/td&gt;
&lt;td&gt;314687.84 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;GNU Octave 7.3.0&lt;/td&gt;
&lt;td&gt;2 KB + 2.07 GB&lt;/td&gt;
&lt;td&gt;2464913.29 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;Divooka 0.75.2, Neo Editor estimate&lt;/td&gt;
&lt;td&gt;8 KB + 2.67 GB&lt;/td&gt;
&lt;td&gt;~95.8 hr&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;This benchmark only tests a narrow case.&lt;/p&gt;

&lt;p&gt;It does not test UI programming, data manipulation, graph execution, external libraries, async workloads, recursion-heavy workloads, or large real-world applications.&lt;/p&gt;

&lt;p&gt;It also does not fully normalize warm-up behavior. Browser JavaScript likely benefits from highly optimized JIT behavior, and other runtimes may have different startup / warm-up costs.&lt;/p&gt;

&lt;p&gt;GNU Octave is probably optimized for matrix-oriented workloads, not naive scalar loops.&lt;/p&gt;

&lt;p&gt;Python is rarely used this way in performance-sensitive numerical code; normally people use NumPy, Numba, Cython, PyPy, or native extensions.&lt;/p&gt;
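
&lt;p&gt;As a rough sketch of what that idiomatic path looks like, here is the same naive escape-time algorithm vectorized with NumPy (region bounds chosen for illustration; this is not the benchmark's official implementation):&lt;/p&gt;

```python
import numpy as np

# Vectorized escape-time counts: same naive algorithm, but the per-pixel
# loop becomes whole-array operations running in compiled code.
def mandelbrot_numpy(width, height, max_iter):
    x = np.linspace(-2.0, 1.0, width)
    y = np.linspace(-1.5, 1.5, height)
    c = x[None, :] + 1j * y[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int64)
    alive = np.ones(c.shape, dtype=bool)  # points that have not escaped yet
    for _ in range(max_iter):
        z[alive] = z[alive] * z[alive] + c[alive]
        alive[alive] = np.abs(z[alive]) <= 2.0  # drop freshly escaped points
        counts[alive] += 1
    return counts

counts = mandelbrot_numpy(200, 200, 100)
print(counts.max(), counts.min())
```

&lt;p&gt;Pushing the inner loop into compiled array operations is why this style typically closes much of the gap shown in the table above.&lt;/p&gt;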

</description>
      <category>divooka</category>
      <category>benchmark</category>
      <category>programming</category>
      <category>performance</category>
    </item>
    <item>
      <title>English is The Worst Programming Language</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Sun, 26 Apr 2026 03:47:34 +0000</pubDate>
      <link>https://dev.to/methodox/english-is-the-worst-programming-language-35oi</link>
      <guid>https://dev.to/methodox/english-is-the-worst-programming-language-35oi</guid>
      <description>&lt;p&gt;In the world of software engineering, we obsess over programming languages: their syntax, their type systems, their performance characteristics, their paradigms. We debate Rust versus Go, Python's readability versus C++'s control, Haskell's purity versus JavaScript's chaos. Yet we rarely acknowledge the elephant in the room — or rather, the language we all use every day that makes every other language look like a model of crystalline precision: &lt;strong&gt;English&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;English is, without question, the worst programming language ever created. It is not merely flawed; it is fundamentally unsuited for the reliable transmission of complex ideas. Anyone who has ever tried to communicate a non-trivial technical concept to another human being knows this instinctively. The ambiguities, the contextual dependencies, the endless opportunities for misinterpretation — English excels at all of them.&lt;/p&gt;

&lt;p&gt;If you have spent any time in meetings, code reviews, specification documents, or even casual Slack threads, you have felt the pain. And if you have any background in mathematics, physics, or formal logic, the contrast becomes almost comically stark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Precision: The Foundation of Reliable Computation (and Communication)
&lt;/h3&gt;

&lt;p&gt;At its core, a good programming language is a formal system designed for &lt;em&gt;unambiguous&lt;/em&gt; execution. Every valid program has exactly one meaning under the language's semantics. A compiler or interpreter will produce the same behavior every time, assuming the same inputs and environment. This is not an accident; it is the entire point.&lt;/p&gt;

&lt;p&gt;Mathematics and physics adopted formal languages for the same reason. When Newton or Leibniz developed calculus, they didn't just wave their hands and say "the rate of change, you know, kinda like speed but for curves." They created notation — symbols, rules of manipulation, axioms — that allowed precise statements like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09vg4fmhxnhwq3ko9fnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09vg4fmhxnhwq3ko9fnv.png" alt="Derivative Example" width="165" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This expression has one, and only one, interpretation among those who understand the formal system. There is no room for "well, depending on what you mean by &lt;em&gt;derivative&lt;/em&gt;" or "in my experience, sometimes it's 2x plus or minus a bit."&lt;/p&gt;

&lt;p&gt;Physics follows suit. Maxwell's equations in differential form are not poetic suggestions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fije9ywlr9drmgbsihpsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fije9ywlr9drmgbsihpsq.png" alt="Maxwell Equation" width="596" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are not open to "opinion-based" interpretation. Misunderstand a sign or a vector component, and your electromagnetic device will quite literally fail to function. Engineers and physicists spend years learning to think within these formalisms precisely because natural language is too sloppy for the phenomena they describe.&lt;/p&gt;

&lt;p&gt;Formal languages exist because human natural languages are &lt;em&gt;terrible&lt;/em&gt; at edge cases, scope, and unintended interpretations. We invented predicate logic, set theory, lambda calculus, and programming languages to escape the swamp of ambiguity.&lt;/p&gt;

&lt;p&gt;English, by contrast, is the swamp.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Many Ways English Fails as a Specification Language
&lt;/h3&gt;

&lt;p&gt;Consider something as seemingly simple as a requirement: "The system shall process transactions quickly."&lt;/p&gt;

&lt;p&gt;What does "quickly" mean? Under 100ms? Under 500ms? Does it depend on load? On hardware? On network conditions? Is this a hard guarantee or a soft target? English offers no mechanism to distinguish these.&lt;/p&gt;

&lt;p&gt;Now imagine trying to implement a sorting algorithm based on an English description:&lt;/p&gt;

&lt;p&gt;"Sort the list in ascending order, but put the most important items first, unless they're duplicates, in which case keep the original order for those."&lt;/p&gt;

&lt;p&gt;Good luck. Is "important" defined? What constitutes a duplicate? Does "original order" refer to stable sort semantics? English leaves all of this to intuition, shared context, and — inevitably — arguments in code review.&lt;/p&gt;

&lt;p&gt;Real-world examples abound in software projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Handle errors gracefully" — Does this mean log them? Retry? Notify the user? Crash the process? Return a default value?&lt;/li&gt;
&lt;li&gt;"Make the UI intuitive" — Intuitive to whom? A power user? A complete novice? Someone from a different cultural background?&lt;/li&gt;
&lt;li&gt;"The cache should be consistent" — Eventual consistency? Strong consistency? What about partition tolerance?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not edge cases. They are the norm. Experienced engineers learn to translate vague English requirements into precise specifications, often using formal tools (user stories with acceptance criteria, UML, TLA+, property-based testing) precisely because English alone is insufficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ambiguity at Every Level
&lt;/h3&gt;

&lt;p&gt;English is ambiguous at the lexical, syntactic, semantic, and pragmatic levels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lexical ambiguity&lt;/strong&gt; (multiple meanings for the same word): "Bank" can mean a financial institution, the side of a river, or to rely on something. Context usually helps, but in technical documents, context can itself be contested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Syntactic ambiguity&lt;/strong&gt; (parsing issues): Classic example — "I saw the man with the telescope." Did I use the telescope to see the man, or did the man have the telescope? Programming languages avoid this with strict grammar rules and operator precedence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic ambiguity&lt;/strong&gt;: "The function should return the largest value." Largest by what metric? In what ordering? For floating-point numbers, what about NaN or signed zeros?&lt;/p&gt;
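
&lt;p&gt;This one is easy to demonstrate concretely. Even inside a single, well-specified language, "the largest value" shifts meaning depending on where a NaN happens to sit:&lt;/p&gt;

```python
import math

a = [1.0, float("nan"), 3.0]
b = [float("nan"), 1.0, 3.0]

# max() keeps the current candidate unless a later item compares greater;
# every comparison with NaN is False, so the result depends on where the
# NaN sits in the list:
print(max(a))              # 3.0  (the NaN is never selected)
print(math.isnan(max(b)))  # True (the initial NaN is never displaced)
```

&lt;p&gt;Both calls satisfy the English sentence; only one of them is what most authors mean.&lt;/p&gt;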

&lt;p&gt;&lt;strong&gt;Pragmatic ambiguity&lt;/strong&gt;: What is left unsaid. In a conversation between colleagues who have worked together for years, massive amounts of information are implied. New team members, or an AI trying to implement from a spec, have no access to that shared mental model.&lt;/p&gt;

&lt;p&gt;Compare this to Python, often praised for readability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even without the type hints, the behavior is unambiguous to anyone who knows Python's semantics. &lt;code&gt;max([])&lt;/code&gt; raises an exception, but here we explicitly handle the empty case. The return type hint further constrains expectations.&lt;/p&gt;

&lt;p&gt;English has no equivalent of type hints, no compiler to catch inconsistencies, no unit tests for prose.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why We Put Up With It (And Why We Shouldn't)
&lt;/h3&gt;

&lt;p&gt;English has undeniable strengths. It is expressive, flexible, and culturally rich. It can convey emotion, nuance, humor, and poetry in ways that formal languages cannot. Shakespeare didn't write in lambda calculus for a reason.&lt;/p&gt;

&lt;p&gt;But when the goal is &lt;em&gt;reliable transmission of executable ideas&lt;/em&gt; — specifications, APIs, requirements, designs — those strengths become liabilities. Flexibility is another word for underspecification.&lt;/p&gt;

&lt;p&gt;Anyone with real experience communicating complex ideas knows the pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write what seems like a clear English description.&lt;/li&gt;
&lt;li&gt;Send it to a colleague.&lt;/li&gt;
&lt;li&gt;Receive back an implementation that is technically correct but completely misses the intent.&lt;/li&gt;
&lt;li&gt;Spend hours in discussion clarifying what was "obviously" meant.&lt;/li&gt;
&lt;li&gt;Repeat.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is why mature software organizations invest heavily in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Precise requirements documents with measurable acceptance criteria&lt;/li&gt;
&lt;li&gt;Formal methods (where the cost justifies it — e.g., aerospace, finance, medical devices)&lt;/li&gt;
&lt;li&gt;Contract testing, property-based testing, and exhaustive specification&lt;/li&gt;
&lt;li&gt;Code as the single source of truth, with comments used sparingly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The very existence of these practices is an admission that English is inadequate.&lt;/p&gt;

&lt;p&gt;In physics and mathematics, we don't tolerate "it's approximately true, in the usual sense." We define terms rigorously. We prove theorems. We build models whose predictions can be falsified experimentally. The precision of the language enables the precision of thought and the reliability of results.&lt;/p&gt;

&lt;p&gt;Programming aspires to the same rigor. That's why we have static type systems, formal verification tools like Coq or Lean, and languages with strong guarantees (memory safety, data-race freedom, etc.).&lt;/p&gt;

&lt;p&gt;English remains the worst because it resists all such discipline. It evolves organically, embraces exceptions, and thrives on interpretation. It is a living, messy, human thing — wonderful for literature, conversation, and persuasion; disastrous for programming.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Humble Remark
&lt;/h3&gt;

&lt;p&gt;We don't need to abandon English. We need to recognize its limitations and use better tools where precision matters.&lt;/p&gt;

&lt;p&gt;Next time you write a technical specification, ask yourself: Could this be misinterpreted in a way that leads to incorrect behavior? If the answer is yes (and it almost always is), add precision — through examples, edge cases, invariants, or even pseudocode.&lt;/p&gt;

&lt;p&gt;Treat English as a high-level, lossy compilation target rather than the final executable specification.&lt;/p&gt;

&lt;p&gt;Because in the end, the computer doesn't care about your opinions, your intent, or what "everyone knows" you meant. It executes precisely what you told it — in a real programming language.&lt;/p&gt;

&lt;p&gt;And humans, despite our intelligence, are not much better than computers when the specification is written in the worst programming language of all: English. &lt;/p&gt;

&lt;p&gt;We just hide our bugs better, and argue about them longer.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>english</category>
    </item>
    <item>
      <title>DevLog 20260319: Metaprogramming (Teaser)</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Thu, 19 Mar 2026 20:16:19 +0000</pubDate>
      <link>https://dev.to/methodox/devlog-20260319-metaprogramming-teaser-4pn9</link>
      <guid>https://dev.to/methodox/devlog-20260319-metaprogramming-teaser-4pn9</guid>
      <description>&lt;h2&gt;
  
  
  Quick Comment
&lt;/h2&gt;

&lt;p&gt;Following &lt;a href="https://dev.to/methodox/devlog-20260319-towards-oop-document-level-main-module-and-module-level-members-and-behaviors-133b"&gt;Document-Level Main Module&lt;/a&gt;, the possibility really opens up...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpyyqm8nrqta08anl9pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpyyqm8nrqta08anl9pf.png" alt="Proper Metaprogramming in Divooka" width="634" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is very different from our earlier hard-coded "Stats" node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eo3c5au5cwonf2pquz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eo3c5au5cwonf2pquz7.png" alt="A Hardcoded Graph Stats (Summary) Node" width="434" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>metaprogramming</category>
      <category>advanced</category>
      <category>divooka</category>
    </item>
    <item>
      <title>DevLog 20260319: Towards OOP - Document-Level Default Module and Module-Level Members and Behaviors</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Thu, 19 Mar 2026 20:16:08 +0000</pubDate>
      <link>https://dev.to/methodox/devlog-20260319-towards-oop-document-level-main-module-and-module-level-members-and-behaviors-133b</link>
      <guid>https://dev.to/methodox/devlog-20260319-towards-oop-document-level-main-module-and-module-level-members-and-behaviors-133b</guid>
      <description>&lt;h2&gt;
  
  
  Revisiting Flappy Bird – The Pain Point
&lt;/h2&gt;

&lt;p&gt;Remember our Flappy Bird example from mid-2025?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F509yykxr054pazr7crm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F509yykxr054pazr7crm6.png" alt="The Initialization Sequence for Flappy Bird Game in Divooka Interactive Demo" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The long chain of variable initializations was needed for two main reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;No way to declare variables before using them
&lt;/li&gt;
&lt;li&gt;More critically - &lt;strong&gt;no way to define custom data structures&lt;/strong&gt; that could group related values together and be instantiated with a single node.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In retrospect, it seems obvious: we just need proper structured data containers. But building the scaffolding to support this properly in a visual, node-based system is a bit more involved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc044pu9pmmgtppisnts7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc044pu9pmmgtppisnts7.png" alt="Initialize State with A Single Node with Structured Data" width="784" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Searching for a Solution
&lt;/h2&gt;

&lt;p&gt;I spent a long time trying to bring OOP concepts into Divooka. A huge amount of effort went into DiOS (Divooka Open Standards), with the hope that a complete standard would naturally solve these kinds of problems. But standardization work is slow and deep - just like making Divooka itself feature-complete.&lt;/p&gt;

&lt;p&gt;Then, while I was working on something completely unrelated and taking a short break from this topic, it suddenly clicked.&lt;/p&gt;

&lt;p&gt;The key insight: &lt;strong&gt;start simple and approach it from a functional/declarative angle first&lt;/strong&gt;, before jumping into inheritance, encapsulation, polymorphism, and all the rest.&lt;/p&gt;

&lt;p&gt;I'm really excited - we finally have meaningful progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Divooka Way
&lt;/h2&gt;

&lt;p&gt;Claiming that "everything is a node" is our goal would be a bit too strong, but I strongly prefer keeping almost everything on the graph.&lt;/p&gt;

&lt;p&gt;This aligns with how we already handle &lt;a href="https://dev.to/methodox/devlog-20260225-divooka-visual-programming-direct-value-setting-of-compound-inputs-4dej"&gt;structured primitives&lt;/a&gt; - direct value setting of compound inputs. That way, there's still hope that users can drop down to raw node programming when they need full control (and thus keep everything programmable).&lt;/p&gt;

&lt;p&gt;Traditional property panels (like Unreal Blueprints' object definition GUIs) are out - they break the pure graph paradigm.&lt;/p&gt;

&lt;p&gt;Instead, the solution is declarative, similar to how TerraGen works: &lt;strong&gt;the mere presence and connection of certain nodes defines meaning&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt0uo32f876mkby6xfce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt0uo32f876mkby6xfce.png" alt="Custom Module Data Member Definition in Divooka" width="709" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5rcgs9pioy63whstj6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5rcgs9pioy63whstj6r.png" alt="Declarative Programming in Divooka in the Context of Module Member Definitions" width="287" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This feels &lt;strong&gt;awesome&lt;/strong&gt; for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can make full use of the 2D graph layout - place things exactly where we want them visually
&lt;/li&gt;
&lt;li&gt;It stays completely coherent with Divooka's graphical paradigm
&lt;/li&gt;
&lt;li&gt;No need for separate "class editor" UIs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Been Implemented So Far
&lt;/h2&gt;

&lt;p&gt;Current working pieces:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Graphs can now act as &lt;strong&gt;modules&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Support for &lt;strong&gt;module-level data members&lt;/strong&gt; and &lt;strong&gt;behavior members&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;main module&lt;/strong&gt; tied to the document, with existing graphs serving as associated behaviors&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Module-scoped instance node – in Events context&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fuz1d4xkpggewvio85i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fuz1d4xkpggewvio85i.png" alt="Module Scoped Instance Node in Divooka - Events Context" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Module-scoped instance node – in Dataflow context&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3it8f0cg1weluxbjb9uq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3it8f0cg1weluxbjb9uq.png" alt="Module Scoped Instance Node in Divooka - Dataflow Context" width="565" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top-level module behavior graphs&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54bhw9ol26pp6265hjri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54bhw9ol26pp6265hjri.png" alt="Top Level Module Behavior Member Graphs in Divooka" width="776" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This immediately unlocks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clean access to (top-level) module instances
&lt;/li&gt;
&lt;li&gt;Shared event and dataflow instance data
&lt;/li&gt;
&lt;li&gt;A unified programming and document model for subgraphs (still a work-in-progress) &lt;/li&gt;
&lt;li&gt;In procedural contexts (&lt;strong&gt;Glaze!&lt;/strong&gt; especially): we now have proper data containers - no more awkward giant variable lists or forced tuples! (Compare to the old Flappy Bird screenshot)
&lt;/li&gt;
&lt;li&gt;Much simpler and more natural state storage/access in &lt;strong&gt;Glaze!&lt;/strong&gt; and procedural graphs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a big deal - for a system that's gearing up for real production use, it feels like crossing a major milestone in usefulness and core abstractions.&lt;/p&gt;
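&lt;p&gt;For readers coming from textual languages, the graph-as-module idea maps roughly onto a class: module-level data members play the role of fields, and behavior members play the role of methods, with the main module acting as the document-level instance. A minimal Python sketch of the analogy (names and numbers are hypothetical, not Divooka API):&lt;/p&gt;

```python
# Hypothetical textual analogue of a Divooka graph-as-module.
# Data members ~ fields; behavior members ~ associated behavior graphs;
# the "main module" is the document-level instance that owns them.

class BirdModule:
    """Analogue of a graph acting as a module (e.g. Flappy Bird state)."""

    def __init__(self):
        # Module-level data members replace a giant flat variable list.
        self.position = 0.0
        self.velocity = 0.0

    # Behavior members: graphs associated with the module.
    def flap(self, impulse=1.5):
        self.velocity = impulse

    def step(self, dt=0.016, gravity=-9.8):
        self.velocity += gravity * dt
        self.position += self.velocity * dt

bird = BirdModule()  # a module-scoped instance
bird.flap()
bird.step()
```

&lt;p&gt;The point of the analogy: state and the behaviors that act on it live in one named unit, instead of being spread across loose variables and tuples.&lt;/p&gt;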

&lt;h2&gt;
  
  
  Current Limitations &amp;amp; Remaining Work
&lt;/h2&gt;

&lt;p&gt;Still plenty to fix/polish:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A bunch of bugs and edge cases
&lt;/li&gt;
&lt;li&gt;Proper serialization support
&lt;/li&gt;
&lt;li&gt;Smooth context switching between graphs
&lt;/li&gt;
&lt;li&gt;Subgraph consolidation / better nesting UX
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Looking Ahead
&lt;/h2&gt;

&lt;p&gt;This is a solid foundation, but there's still a long road:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full modularization support
&lt;/li&gt;
&lt;li&gt;Modules inside modules (nesting)
&lt;/li&gt;
&lt;li&gt;Module inheritance / composition patterns / polymorphism&lt;/li&gt;
&lt;li&gt;Base DOM exposed as the "main module" for the metaprogramming API
&lt;/li&gt;
&lt;li&gt;Detailed documentation: how it all fits together, recommended usage patterns, expected behaviors, pitfalls
&lt;/li&gt;
&lt;li&gt;Module-level scoped instances (deeper/finer control)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall - I am very happy with the direction. It feels like Divooka is finally getting the abstraction power it deserves while staying true to its visual, graph-first philosophy.&lt;/p&gt;

&lt;p&gt;I am hyped for what we will achieve.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://wiki.methodox.io/en/Frameworks/Glaze/Tutorials/FlappyBird" rel="noopener noreferrer"&gt;The original flappy bird example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/methodox/devlog-20250711-flappy-bird-in-divooka-sneak-peak-40j8"&gt;DevLog on flappy bird example&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>divooka</category>
      <category>oop</category>
      <category>devlog</category>
      <category>programming</category>
    </item>
    <item>
      <title>A Detective Story: The Case of The Dead Office Guy (Upcoming Game Release)</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Fri, 06 Mar 2026 19:29:11 +0000</pubDate>
      <link>https://dev.to/methodox/a-detective-story-the-case-of-the-dead-office-guy-upcoming-game-release-5529</link>
      <guid>https://dev.to/methodox/a-detective-story-the-case-of-the-dead-office-guy-upcoming-game-release-5529</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In the quiet town of Baytown, an unremarkable man lies dead under suspicious circumstances. Rumors ripple through the streets. Secrets long buried begin to surface. The whole community is buzzing - some in dread, others gripped by dark fascination.&lt;/p&gt;

&lt;p&gt;A Detective Story is a compact detective simulator that brings generative AI to the heart of interactive fiction.&lt;/p&gt;

&lt;p&gt;Step into the role of detective in this small-scale murder investigation. Interrogate a cast of AI-powered townsfolk, each with their own motives, fragmented memories, and the potential to lie or mislead. Explore the map to uncover physical evidence and hidden clues. Piece it all together before time runs out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Game Guide
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Your mission: Solve the murder as efficiently as possible - minimize days elapsed and questions spent.&lt;/li&gt;
&lt;li&gt;Bringing suspects to the station costs valuable time (days). Choose your interviews with care.&lt;/li&gt;
&lt;li&gt;These are informal interrogations - no formal oaths, no truth serum. Witnesses and suspects may deceive, omit, or hide the truth.&lt;/li&gt;
&lt;li&gt;You have only 3 questions per interview. Make every one count.&lt;/li&gt;
&lt;li&gt;Stay in character for maximum immersion: Ask about alibis, relationships, timelines, strange sightings, and contradictions.&lt;/li&gt;
&lt;li&gt;Cross-check stories, hunt inconsistencies, and link conversations to evidence found on the map.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hardware Requirements
&lt;/h2&gt;

&lt;p&gt;A Detective Story is lightweight and runs smoothly on modest hardware. It requires at least 10 GB of system RAM (DRAM) - with no heavy VRAM demands.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>game</category>
      <category>simulation</category>
      <category>mystery</category>
    </item>
    <item>
      <title>DevLog 20260225: Divooka Visual Programming - Direct Value Setting of Compound Inputs</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Wed, 25 Feb 2026 17:46:45 +0000</pubDate>
      <link>https://dev.to/methodox/devlog-20260225-divooka-visual-programming-direct-value-setting-of-compound-inputs-4dej</link>
      <guid>https://dev.to/methodox/devlog-20260225-divooka-visual-programming-direct-value-setting-of-compound-inputs-4dej</guid>
      <description>&lt;p&gt;Finally, it arrived! Well, not exactly here yet, but getting closer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uefmdr5tir0qjdcfyax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uefmdr5tir0qjdcfyax.png" alt="Screenshot of Visual Programming in Divooka - Value Assignment of Compound Inputs" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Divooka intends to make visual programming simple and accessible - the accessible part is achieved through wide availability and the general-purpose nature of the programming system, but the &lt;em&gt;simple&lt;/em&gt; part is where we spend the most effort.&lt;/p&gt;

&lt;p&gt;In the past, dealing with complex configuration has taken the form of either an &lt;em&gt;"explosion"&lt;/em&gt; of arguments or dedicated &lt;code&gt;Make XXX&lt;/code&gt; construction functions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf6epcnl90nzwn4v3sni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf6epcnl90nzwn4v3sni.png" alt="An Example of A Node with Lots of Inputs" width="559" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhaamwdqh83lph8wphcw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhaamwdqh83lph8wphcw4.png" alt="An Example of A Node Taking a Struct, Which Needs to be Created from a Node with Lots of Inputs" width="763" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem arises from the direct exposure of all input arguments in visual programming. Both approaches make the canvas look busy: the former is a common approach in existing systems, while using a struct sounds good on paper but still requires a complex-looking construction node. This is a common problem in visual programming systems; below is how it looks in ComfyUI and Blender.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp74kj1nn0v5clwya2bp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp74kj1nn0v5clwya2bp4.png" alt="A Node with Many Inputs in ComfyUI" width="355" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jkcps43hasucbgc4fjr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jkcps43hasucbgc4fjr.png" alt="A Node with Many Inputs in Blender" width="505" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You do get used to it after a while, but a busy-looking canvas remains a busy-looking canvas. In Blender, you can sort of "collapse" a node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51ssein4u54ak1b4x2dp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51ssein4u54ak1b4x2dp.png" alt="Collapse Nodes in Blender" width="617" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As Blender is constantly improving, we can also now collapse property groups as shown in the earlier screenshot.&lt;/p&gt;

&lt;p&gt;As a syntax purist who is (unfortunately) very conscious of my working space, I can't tolerate messiness. I kind of like how Houdini approaches this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rcfmxoty5i6dt915ue9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rcfmxoty5i6dt915ue9.png" alt="Nodes in Houdini Are Very Simple and Clean" width="479" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25jwxr7qrhzxak2u6ylm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25jwxr7qrhzxak2u6ylm.png" alt="Houdini Makes Use of Property Panels" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Essentially they offload all the complexity to the properties panel - this has the huge benefit of making everything look very clean. The downside is that Houdini relies heavily on scripting expressions (like Excel).&lt;/p&gt;

&lt;p&gt;TerraGen and World Machine mix the two approaches:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j6701vpaacarx9406jj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j6701vpaacarx9406jj.png" alt="A Node Graph in World Machine" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffabvq61w0sqm3oiybyxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffabvq61w0sqm3oiybyxf.png" alt="A Node Graph in TerraGen" width="800" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In both cases, connections represent the flow of terrain data only, and additional configurations are offloaded to property panels. In TerraGen especially, this allows those properties to be animated. The downside, once again, is that &lt;strong&gt;whatever is offloaded to property panels is no longer programmable using nodes&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Approach
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uefmdr5tir0qjdcfyax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uefmdr5tir0qjdcfyax.png" alt="Compound Input Editing Popup in Divooka" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our proposed solution for Divooka looks like this: fundamentally all properties remain as node inputs, but compound values (especially value types) should just allow "editing in place" through a property panel. If more complex programming is needed, one can easily fall back to &lt;code&gt;Make XXX&lt;/code&gt; nodes again.&lt;/p&gt;
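&lt;p&gt;The in-place editing idea can be described as a simple resolution rule: each compound input carries an inline, panel-edited value, and a wired connection overrides it. A small Python sketch of that rule (names are illustrative, not the actual Divooka implementation):&lt;/p&gt;

```python
# Sketch of compound-input resolution: an input port keeps an inline
# (panel-edited) value, but a wired connection always takes priority.

class InputPort:
    def __init__(self, inline_value=None):
        self.inline_value = inline_value   # set via the property panel
        self.connection = None             # upstream node output, if wired

    def resolve(self):
        # A connection (e.g. a "Make XXX" node) overrides the inline value.
        if self.connection is not None:
            return self.connection()
        return self.inline_value

# Inline editing covers the simple case...
pen = InputPort(inline_value={"width": 2, "color": "red"})
assert pen.resolve()["color"] == "red"

# ...while falling back to a constructor node stays possible.
pen.connection = lambda: {"width": 5, "color": "blue"}
assert pen.resolve()["width"] == 5
```

&lt;p&gt;This keeps every property programmable in principle while letting the common case stay off the canvas.&lt;/p&gt;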

&lt;h2&gt;
  
  
  Future Work
&lt;/h2&gt;

&lt;p&gt;Work remains on type checking, hierarchical compound types, arrays, serialization, and node rendering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As usual with all the exciting language features of Divooka we intend to introduce, there are lots of details to take care of here.&lt;/p&gt;

</description>
      <category>divooka</category>
      <category>programminglanguage</category>
      <category>devlog</category>
      <category>design</category>
    </item>
    <item>
      <title>DevLog 20260224 Divooka - Node-Level Automatic Dispatch (Runtime Execution Behavior)</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Tue, 24 Feb 2026 22:20:52 +0000</pubDate>
      <link>https://dev.to/methodox/devlog-20260224-divooka-node-level-automatic-dispatch-runtime-execution-behavior-2dih</link>
      <guid>https://dev.to/methodox/devlog-20260224-divooka-node-level-automatic-dispatch-runtime-execution-behavior-2dih</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0nmv1dzv1kuerx8vvr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0nmv1dzv1kuerx8vvr5.png" alt="Scalar Operation" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Anything useful done with a program eventually involves file I/O and some form of repetition.&lt;/p&gt;

&lt;p&gt;In Divooka, the dataflow context already makes it very easy to "just get a single thing done." That part feels good. But handling loops? That's where things start to get awkward.&lt;/p&gt;

&lt;p&gt;One challenge lies on the GUI side — especially around lambdas and subgraphs. We do have foundational support (as shown &lt;a href="https://dev.to/methodox/devlog-20250510-dealing-with-lambda-3ff9"&gt;here&lt;/a&gt; and &lt;a href="https://dev.to/methodox/progress-share-graph-local-lambda-calculus-4dj2"&gt;here&lt;/a&gt;), but let's be honest: it's not &lt;em&gt;smooth&lt;/em&gt; yet.&lt;/p&gt;

&lt;p&gt;So the question becomes: how do we make repetition feel natural without introducing heavy conceptual overhead?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Plan
&lt;/h2&gt;

&lt;p&gt;There are several well-established ways to achieve looping behavior in a functional or visual programming context:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Subgraph context&lt;/strong&gt; - Used in tools like Blender, vvvv, and Houdini. A specialized node group or frame defines loop entry and exit boundaries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda callbacks&lt;/strong&gt; - Evaluation is handled via callback-style execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expressive recursion&lt;/strong&gt; - As seen in text-based functional languages, requiring explicit termination conditions to avoid stack overflow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of these approaches are powerful and necessary in certain contexts. But they also introduce additional constructs into the language model. For advanced workflows, that's fine. For simple operations, it can feel unnecessarily complex.&lt;/p&gt;

&lt;p&gt;I wanted something lighter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Array Coercion
&lt;/h2&gt;

&lt;p&gt;Divooka already supports &lt;strong&gt;Array Coercion&lt;/strong&gt; — a scalar value can be passed into an input expecting a collection.&lt;/p&gt;

&lt;p&gt;This avoids the need to manually "wrap" a scalar into an array just to satisfy a function signature - I still remember the pain of handling arrays in Unreal Blueprint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2m2p8hs7ellxbfzgj5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2m2p8hs7ellxbfzgj5m.png" alt="Treat Scalar as Array" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scalar-to-array coercion keeps graphs clean and avoids clutter.&lt;/p&gt;
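&lt;p&gt;The coercion rule itself is tiny: if a collection is expected and a scalar arrives, wrap it. A hedged Python sketch of the semantics (not the actual runtime code):&lt;/p&gt;

```python
# Scalar-to-array coercion: a scalar passed where a collection is
# expected is implicitly wrapped into a one-element list.

def coerce_to_array(value):
    if isinstance(value, (list, tuple)):
        return list(value)
    return [value]  # wrap the scalar instead of rejecting it

assert coerce_to_array(42) == [42]
assert coerce_to_array([1, 2, 3]) == [1, 2, 3]
```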

&lt;p&gt;To take this to the next level, I've given much thought to how the reverse might work: if we can coerce scalars into arrays, what if we also let nodes automatically dispatch when arrays are passed in?&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatic Dispatch (Node-Level)
&lt;/h2&gt;

&lt;p&gt;The core design question was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Should an array input implicitly turn the &lt;em&gt;entire downstream chain&lt;/em&gt; into a loop?&lt;/li&gt;
&lt;li&gt;Or should the dispatch behavior stay localized to the node itself?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eventually, localizing the behavior to the node itself made far more sense.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string String.Replace(string, string, string)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any input is provided as an array instead of a scalar, the node automatically promotes itself to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string[] String.Replace(string[], string[], string[])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The node executes element-wise.&lt;/p&gt;

&lt;p&gt;Now, I can rename a folder of files with roughly three nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enumerate files&lt;/li&gt;
&lt;li&gt;Regex replace&lt;/li&gt;
&lt;li&gt;Rename file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No explicit loop node. No subgraph. No lambdas.&lt;/p&gt;
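&lt;p&gt;As a textual point of comparison, the same three-step pipeline looks roughly like this in Python (a sketch of the intent, not code generated by Divooka):&lt;/p&gt;

```python
# Textual equivalent of the three-node graph: enumerate files,
# regex-replace the names, rename each file.
import re
from pathlib import Path

def rename_all(folder, pattern, replacement):
    for path in Path(folder).iterdir():                          # 1. enumerate files
        if path.is_file():
            new_name = re.sub(pattern, replacement, path.name)   # 2. regex replace
            if new_name != path.name:
                path.rename(path.with_name(new_name))            # 3. rename file
```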

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bvz2dbw56ctqp2kju5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bvz2dbw56ctqp2kju5n.png" alt="Dispatch Setup" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, &lt;code&gt;Duplicate as Array&lt;/code&gt; returns a strongly typed &lt;code&gt;string[]&lt;/code&gt;, enabled by &lt;a href="https://dev.to/methodox/devlog-20250710-generics-in-divooka-49e6"&gt;generics support&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One important rule: All array inputs must align in size with the source array. The dispatch is index-based and deterministic.&lt;/p&gt;
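&lt;p&gt;Node-level dispatch can be summarized in a few lines: detect array inputs, check that their lengths agree, broadcast any remaining scalars, then apply the scalar function index by index. A Python sketch of the semantics (illustrative only; Divooka's runtime is not literally written this way):&lt;/p&gt;

```python
# Node-level automatic dispatch: if any input is an array, broadcast
# scalars and run the scalar function element-wise, index-aligned.

def dispatch(func, *inputs):
    arrays = [i for i in inputs if isinstance(i, list)]
    if not arrays:
        return func(*inputs)             # plain scalar call
    n = len(arrays[0])
    if any(len(a) != n for a in arrays):
        raise ValueError("array inputs must align in size")
    # Coerce scalars to arrays of matching length, then map index-wise.
    columns = [i if isinstance(i, list) else [i] * n for i in inputs]
    return [func(*row) for row in zip(*columns)]

# string String.Replace(string, string, string), promoted element-wise:
result = dispatch(str.replace, ["a_1.txt", "b_2.txt"], "_", "-")
assert result == ["a-1.txt", "b-2.txt"]
```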

&lt;h2&gt;
  
  
  Full Example
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsypgdt3xrj6g4phkvij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsypgdt3xrj6g4phkvij.png" alt="Full Example" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What used to require explicit iteration logic now becomes implicit behavior at the node boundary. The graph remains readable. The mental model stays simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is still early work. Integration with the rest of Divooka's features is ongoing, and edge cases are being worked through.&lt;/p&gt;

&lt;p&gt;So far so good.&lt;/p&gt;

</description>
      <category>divooka</category>
      <category>methodox</category>
      <category>visualprogramming</category>
      <category>programminglanguage</category>
    </item>
    <item>
      <title>Weekly Update 2026-01-11</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Mon, 12 Jan 2026 01:29:00 +0000</pubDate>
      <link>https://dev.to/methodox/weekly-update-2026-01-11-1j60</link>
      <guid>https://dev.to/methodox/weekly-update-2026-01-11-1j60</guid>
      <description>&lt;p&gt;This week was mostly about tightening up the foundations of our tools rather than pushing new features. A lot of our older experiments were starting to show their age, so we spent time cleaning things up and getting the core systems into better shape.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Happened This Week
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Refactoring Neo + Parcel NExT review&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We continued working through refactors on Neo and reviewing the Parcel NExT implementation. Some of the earlier experimental frontend work had spread across multiple assemblies, so small API changes ripple everywhere. Slow, but necessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WPF + Nodify wrestling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A surprising amount of time went into understanding how WPF resource dictionaries, theming, and Nodify's API actually want to behave together. After digging through the quirks, we finally have a clearer picture of how to keep styling and view models consistent. Didn't get as far into custom styling as we hoped, but at least the fundamentals are sorted out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serialization groundwork&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Started wiring up the basics for text-based JSON serialization of Divooka graphs/documents. Also began work on package reference records and figuring out how packages get identified and loaded at runtime. All early steps, but important ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graph editor + GUI cleanup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Made some initial passes at reorganizing the graph editor. Small changes now, but they'll help us build better tools on top later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fantasy Planet Painter update&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Spent a little time polishing and updating the Steam submission for Fantasy Planet Painter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Methodox Threads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We also pushed out an early public release of &lt;strong&gt;Methodox Threads&lt;/strong&gt;, our lightweight branching text environment for managing non-linear AI conversations. The new v0.7 build adds integrated Gen-AI generation directly inside each document pane, configurable providers, and a simple JSON-backed structure for exporting or versioning thought trees. It's essentially a cleaner way to explore tangents, run parallel prompts, and keep complex LLM work organized. Download is available on &lt;a href="https://methodox.itch.io/threads" rel="noopener noreferrer"&gt;itch.io&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Closing Thoughts
&lt;/h3&gt;

&lt;p&gt;Not a "big features" week, but a surprisingly productive one in terms of future stability. Refactors aren't glamorous, but they make everything that comes next a lot smoother.&lt;/p&gt;

</description>
      <category>divooka</category>
      <category>methodox</category>
      <category>neo</category>
      <category>parcelnext</category>
    </item>
    <item>
      <title>Release Note: Methodox Threads (v0.7)</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Mon, 12 Jan 2026 01:14:34 +0000</pubDate>
      <link>https://dev.to/methodox/release-note-methodox-threads-v07-15a3</link>
      <guid>https://dev.to/methodox/release-note-methodox-threads-v07-15a3</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Version 0.7 introduces full Gen-AI generation capabilities, including per-document prompt-driven content generation, provider configuration, OpenAI integration, and asynchronous multi-editor execution with UI-level busy indicators. This release establishes the foundation for an extensible, multi-provider LLM workflow while maintaining the existing document layout and editing model.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Features in v0.7
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Configurable AI Provider Framework&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A new &lt;strong&gt;Configure…&lt;/strong&gt; dialog provides a unified interface for system-level and provider-specific settings:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;System Tab&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Edit the global &lt;em&gt;System Prompt&lt;/em&gt; used for all generations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;OpenAI Tab&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;API key (masked)&lt;/li&gt;
&lt;li&gt;Optional custom endpoint&lt;/li&gt;
&lt;li&gt;Preset model list:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gpt-4o-mini&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gpt-4o&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;o3-mini&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Other&lt;/code&gt; → reveals a custom model name field&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Support for model overrides when presets become outdated&lt;/li&gt;

&lt;li&gt;Automatic load/save of configuration in a user-specific app directory&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Additional provider tabs (Gemini, DeepSeek, Ollama, Grok) are included as placeholders for future integrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Gen-AI Generation Workflow&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each document now supports prompt-based content generation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users set a &lt;strong&gt;Prompt&lt;/strong&gt; on any document.&lt;/li&gt;
&lt;li&gt;Selecting &lt;strong&gt;Edit → Generate&lt;/strong&gt; triggers generation for the focused document.&lt;/li&gt;
&lt;li&gt;Generation uses:

&lt;ul&gt;
&lt;li&gt;Global System Prompt&lt;/li&gt;
&lt;li&gt;Document Prompt&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Per-Editor Async Generation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each document editor generates independently in parallel.&lt;/li&gt;
&lt;li&gt;Editors become temporarily read-only during generation.&lt;/li&gt;
&lt;li&gt;A semi-transparent overlay displays &lt;em&gt;Generating…&lt;/em&gt; with an indeterminate progress bar.&lt;/li&gt;
&lt;li&gt;Sibling/Child creation buttons remain active.&lt;/li&gt;
&lt;li&gt;Generated text is written directly into the document’s &lt;code&gt;Content&lt;/code&gt; field.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OpenAI Integration (First Provider Implementation)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A new abstraction layer encapsulates provider calls.&lt;br&gt;
Version 0.7 includes the first concrete backend:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI Chat Completion Backend&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses the official OpenAI SDK.&lt;/li&gt;
&lt;li&gt;Supports both default and custom endpoints.&lt;/li&gt;
&lt;li&gt;Converts internal document structures into Chat API messages.&lt;/li&gt;
&lt;li&gt;Returns full assistant text as document content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This abstraction enables drop-in integration of additional providers in future versions.&lt;/p&gt;
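&lt;p&gt;The abstraction can be pictured as a small interface that every backend implements, with the OpenAI backend as the first concrete class. A hedged Python sketch of the shape (class and method names are illustrative, not the Threads codebase):&lt;/p&gt;

```python
# Sketch of a provider abstraction: one generate() contract,
# many interchangeable backends.

class Provider:
    def generate(self, system_prompt, messages):
        raise NotImplementedError

class OpenAIBackend(Provider):
    """First concrete backend; wraps a chat-completion API."""
    def __init__(self, model, api_client):
        self.model = model
        self.client = api_client   # injected so the sketch stays testable

    def generate(self, system_prompt, messages):
        # Convert internal document structure into chat messages.
        chat = [{"role": "system", "content": system_prompt}]
        chat += [{"role": "user", "content": m} for m in messages]
        return self.client(self.model, chat)  # full assistant text

# A fake client shows the drop-in property without network access.
fake = OpenAIBackend("gpt-4o-mini", lambda model, chat: f"{len(chat)} msgs")
assert fake.generate("be brief", ["hello"]) == "2 msgs"
```

&lt;p&gt;Adding Gemini, DeepSeek, Ollama, or Grok later then means writing one more subclass, with the rest of the editor untouched.&lt;/p&gt;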

&lt;h3&gt;
  
  
  &lt;strong&gt;Configuration Persistence&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;All provider and system settings are automatically stored as JSON in the user-local app directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loaded when opening the Configure dialog&lt;/li&gt;
&lt;li&gt;Saved on dialog close&lt;/li&gt;
&lt;li&gt;Ensures persistent environment across editor sessions&lt;/li&gt;
&lt;/ul&gt;
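&lt;p&gt;The persistence scheme is plain JSON round-tripping in a per-user directory: load on dialog open, save on dialog close. Roughly, assuming hypothetical paths and keys:&lt;/p&gt;

```python
# Load-on-open / save-on-close JSON configuration persistence.
import json
from pathlib import Path

def load_config(path):
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text(encoding="utf-8"))
    return {}  # first run: fall back to defaults

def save_config(path, settings):
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # ensure the app directory
    p.write_text(json.dumps(settings, indent=2), encoding="utf-8")
```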

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No document deletion or rearrangement&lt;/li&gt;
&lt;li&gt;Markdown preview remains basic&lt;/li&gt;
&lt;li&gt;Only OpenAI is implemented; other providers are placeholders&lt;/li&gt;
&lt;li&gt;Generation does not yet stream partial output&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://youtu.be/lmZ1Hd7bJkQ?si=nIZnY7euxdIsAoSw" rel="noopener noreferrer"&gt;Basic Concept: Project and Document&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://methodox.itch.io/threads/devlog/1313499/threads-release-notes-v07" rel="noopener noreferrer"&gt;Itch.io Dev Log&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>chatmanagement</category>
      <category>generativeai</category>
      <category>llm</category>
      <category>texteditor</category>
    </item>
    <item>
      <title>Introducing Methodox Threads: Tame the Chaos of Branching AI Conversations</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Mon, 12 Jan 2026 01:13:12 +0000</pubDate>
      <link>https://dev.to/methodox/introducing-methodox-threads-tame-the-chaos-of-branching-ai-conversations-2fl5</link>
      <guid>https://dev.to/methodox/introducing-methodox-threads-tame-the-chaos-of-branching-ai-conversations-2fl5</guid>
      <description>&lt;p&gt;Tired of losing brilliant tangents in endless LLM chats?&lt;br&gt;
Frustrated when editing a prompt wipes out your entire history, or when long threads become impossible to navigate?&lt;/p&gt;

&lt;p&gt;You're not alone. Every generative AI user - from casual brainstormers to researchers, writers, and prompt engineers - hits the same wall:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linear chat interfaces simply weren't built for nonlinear thinking.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's why we built &lt;strong&gt;Methodox Threads&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Threads is a &lt;strong&gt;linear-yet-branching text environment&lt;/strong&gt; crafted for managing complex ideas and AI conversations - especially those that naturally fork into tangents, nested explorations, and parallel paths.&lt;/p&gt;

&lt;p&gt;And with the new &lt;strong&gt;v0.7 release&lt;/strong&gt;, Threads becomes more than an organizational tool. It is now a &lt;strong&gt;generative workspace&lt;/strong&gt; for chat management.&lt;/p&gt;

&lt;h3&gt;Core Experience at a Glance&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-pane, tree-like layout&lt;/strong&gt;
Each document lives in its own dedicated, scrollable editor. The whole structure remains visible, so you can see the big picture while diving deep into any branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Natural branching&lt;/strong&gt;
From any node, spawn &lt;strong&gt;siblings&lt;/strong&gt; (parallel directions) or &lt;strong&gt;children&lt;/strong&gt; (deeper explorations). Build clear, persistent hierarchies without losing context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrated Gen-AI generation (new in v0.7!)&lt;/strong&gt;
Add a &lt;em&gt;Prompt&lt;/em&gt; to any document and generate AI-assisted content directly into that pane.
Includes:

&lt;ul&gt;
&lt;li&gt;Per-document generation&lt;/li&gt;
&lt;li&gt;Parallel, asynchronous execution&lt;/li&gt;
&lt;li&gt;Read-only “Generating…” overlay during output&lt;/li&gt;
&lt;li&gt;Global system prompt&lt;/li&gt;
&lt;li&gt;Configurable AI providers (OpenAI available now; Gemini, DeepSeek, Ollama, Grok coming soon)&lt;/li&gt;
&lt;li&gt;Custom API endpoints and custom model names&lt;/li&gt;
&lt;li&gt;Secure, auto-saved per-user configuration&lt;/li&gt;
&lt;/ul&gt;
Your entire AI workflow now runs inside the branching editor itself.


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Iteration-friendly writing environment&lt;/strong&gt;
Full Markdown editing, inline notes, JSON/Markdown export, and a distraction-free dark UI. Refine AI outputs, rewrite prompts, annotate ideas - without ever resetting a thread.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Portable &amp;amp; future-proof&lt;/strong&gt;
A clean JSON structure under the hood, with markdown Gen-AI folder import/export support for v0.6+ projects. Version-control your thought trees with ease.&lt;/li&gt;

&lt;/ul&gt;
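
&lt;p&gt;The parallel, per-document generation model above can be sketched roughly like this - an illustrative Python/asyncio sketch with a stand-in &lt;code&gt;generate&lt;/code&gt; function, not the actual provider call or implementation language:&lt;/p&gt;

```python
# Hedged sketch: one asynchronous generation task per document pane,
# all running concurrently. generate() is a placeholder, not a real API.
import asyncio

async def generate(prompt: str) -> str:
    await asyncio.sleep(0)  # stand-in for an async provider call
    return f"[AI output for: {prompt}]"

async def generate_all(panes: dict[str, str]) -> dict[str, str]:
    """Run one generation task per document; panes stay independent."""
    async def run(doc_id: str, prompt: str) -> tuple[str, str]:
        return doc_id, await generate(prompt)
    results = await asyncio.gather(*(run(d, p) for d, p in panes.items()))
    return dict(results)
```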

&lt;p&gt;Threads lets you shape AI conversations the way your mind naturally works: branching, revisiting, exploring alternatives, and layering insights - all without friction.&lt;br&gt;
Picture exploring what-ifs in a worldbuilding document, generating multiple stylistic takes side-by-side, or running parallel reasoning chains for research - all visible at once, never buried in scrollback.&lt;/p&gt;

&lt;p&gt;For a deeper dive into the design motivations and architecture, check out our dev log &lt;strong&gt;“Motivations for Methodox Threads”&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;Get Started&lt;/h3&gt;

&lt;p&gt;Threads is available now as an early public release from Methodox Technologies.&lt;br&gt;
Download it today and start turning chaotic AI interactions into structured, generative thought trees.&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;&lt;a href="https://methodox.itch.io/threads" rel="noopener noreferrer"&gt;https://methodox.itch.io/threads&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
→ Follow &lt;strong&gt;@methodox&lt;/strong&gt; for updates, tips, and early access to upcoming provider integrations, streaming AI output, and full multi-model workflows.&lt;/p&gt;

</description>
      <category>chatmanagement</category>
      <category>genai</category>
      <category>texteditor</category>
    </item>
    <item>
      <title>DevLog 20260110: Motivations for Methodox Threads - A Conversation Management Tool</title>
      <dc:creator>Charles Zhang</dc:creator>
      <pubDate>Sat, 10 Jan 2026 20:02:39 +0000</pubDate>
      <link>https://dev.to/methodox/devlog-20260110-motivations-for-methodox-threads-a-conversation-management-tool-45c2</link>
      <guid>https://dev.to/methodox/devlog-20260110-motivations-for-methodox-threads-a-conversation-management-tool-45c2</guid>
      <description>&lt;p&gt;Hey folks, Charles here from Methodox. As a developer deeply embedded in the generative AI space, I've spent the last couple of years wrestling with the limitations of LLM interfaces. Today, I want to dive into the motivations behind our latest project, Methodox Threads - a linear, branching text environment built to tame the chaos of long, tangled conversations. This isn't just another note-taking app; it's a structured canvas designed for iterative thinking, especially when collaborating with AI models. I'll keep this log developer-focused, touching on the architecture and implementation choices, while highlighting why this solves a universal pain point for anyone using gen AI tools.&lt;/p&gt;

&lt;h2&gt;The Problem: Chaos in Conversation Management&lt;/h2&gt;

&lt;p&gt;If you've ever dived deep into a conversation with an LLM like Grok, ChatGPT, or Gemini, you know the drill: things start linear, but soon you're branching into tangents, exploring "what-ifs," and iterating on ideas. The trouble? Existing interfaces suck at handling this complexity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI's ChatGPT&lt;/strong&gt;: They pioneered branching, which is a step up, but in long threads, navigation becomes a nightmare. Scrolling through endless history to find that one pivotal response? Forget it - context gets buried, and resuming a branch feels like archaeology.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grok (xAI)&lt;/strong&gt;: The side navigation pane is a nice idea on paper, but in practice, it falls flat for multi-branching. Creating parallel explorations requires awkward workarounds, and the UI doesn't scale for deep hierarchies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini (Google)&lt;/strong&gt;: Editing a prompt erases the previous version entirely - no versioning, no undo. It's like the tool assumes your first draft is always perfect, which is laughable for creative or analytical work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Worse still, none of these platforms let you edit the AI's responses natively. For developers doing prompt engineering or users in creative writing, this is a deal-breaker. You spot an inconsistency or want to tweak for better flow? You're stuck regenerating from scratch, praying the model stays consistent (spoiler: it often doesn't). OpenAI's Canvas tries to address iterative editing, but it's hampered by the underlying AI's memory lapses and lack of true state management.&lt;/p&gt;

&lt;p&gt;This isn't just a dev gripe - it's a widespread issue. Casual users brainstorming ideas, researchers tracking hypotheses, or writers building narratives all hit the same wall: conversations sprawl, branches get lost, and productivity tanks. In a world where gen AI is democratizing creativity, we need tools that empower structured exploration without the friction.&lt;/p&gt;

&lt;h2&gt;My Journey: From Hacks to a Dedicated Tool&lt;/h2&gt;

&lt;p&gt;I've been hacking at this problem since the early days of gen AI. Inspired by tools like Kobold AI (which excels at local, scriptable story generation), I started building lightweight local management systems. Early prototypes used simple folder structures to mimic threads - each "branch" as a subfolder with text files for prompts and responses. Others adopted custom JSON formats for serialization, allowing easy versioning via Git.&lt;/p&gt;

&lt;p&gt;These worked okay for personal use but lacked visual intuition. Exporting/importing conversations was manual, and scaling to nested tangents felt clunky. That's how I conceived Methodox Threads: born from my startup's focus on AI-enhanced workflows, it's a purpose-built app that evolves these ideas into a robust, user-friendly system.&lt;/p&gt;

&lt;p&gt;From a dev perspective, Threads is architected around a tree-like data model (think hierarchical nodes in a graph database, but lightweight and in-memory for speed). Each node represents a thread segment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Structure&lt;/strong&gt;: A root document spawns children (nested explorations) and siblings (parallel branches). This is implemented with a recursive node class in our backend (built on Avalonia for cross-platform desktop use), easily exportable to JSON format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI Layout&lt;/strong&gt;: Multi-pane editors, each fully scrollable and synchronized. We use a canvas-based layout with customizable pane width and height, plus automatic height adjustment for child documents - useful when dealing with deep hierarchies of varying complexity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Features for Iteration&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Markdown support via a lightweight parser (no rendered preview at this moment, but syntax highlighting is a work in progress).&lt;/li&gt;
&lt;li&gt;Document-wise notes and project README for meta-commentary (e.g., "This branch assumes v2 prompt").&lt;/li&gt;
&lt;li&gt;Project serialization to JSON or export to Markdown in folders, making it Git-friendly for version control.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Distraction-Free Design&lt;/strong&gt;: No bloat - just a clean canvas. You can create branches either from the menu or through the hover buttons; in the future we may provide keyboard shortcuts for branching (e.g., Ctrl+B for a new sibling) and drag-and-drop reorganization, drawing from IDEs like VS Code.&lt;/li&gt;

&lt;/ul&gt;
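
&lt;p&gt;The core structure described above can be sketched in a few lines. This is a hedged Python illustration (the actual backend is a .NET/Avalonia app, and the class and field names here are hypothetical, not the app's real schema): a recursive node with children, serializable to plain JSON.&lt;/p&gt;

```python
# Hedged sketch: a recursive node tree with Git-friendly JSON export.
# Node/field names are illustrative, not the app's actual data model.
import json
from dataclasses import dataclass, field

@dataclass
class Node:
    content: str
    note: str = ""                                   # meta-commentary per document
    children: list["Node"] = field(default_factory=list)

    def add_child(self, content: str) -> "Node":
        child = Node(content)
        self.children.append(child)
        return child

    def to_dict(self) -> dict:
        return {"content": self.content, "note": self.note,
                "children": [c.to_dict() for c in self.children]}

root = Node("World-building for a sci-fi novel")
root.add_child("Character Arcs")
root.add_child("Plot Twists")
serialized = json.dumps(root.to_dict(), indent=2)    # export for version control
```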

&lt;p&gt;The goal is to enable deep dives without losing the forest for the threads. For general users, this means organizing a chatbot session on "world-building for a sci-fi novel" into branches like "Character Arcs," "Plot Twists," and "World Lore" - all visible at a glance. For devs, it's a playground for prompt chaining, where you can fork a thread to test API variations without derailing the main flow.&lt;/p&gt;

&lt;h2&gt;Why Build This?&lt;/h2&gt;

&lt;p&gt;At its core, Threads addresses the cognitive load of unstructured AI interactions. Gen AI users - whether hobbyists or pros - crave persistence and flexibility. Developers like us need it for debugging prompts, prototyping agents, or managing multi-model experiments (e.g., comparing Grok vs. GPT outputs side-by-side). The general public benefits from a tool that makes AI feel less like a black box and more like a collaborative partner, especially in fields like research, writing, or ideation.&lt;/p&gt;

&lt;p&gt;We're charging a small courtesy fee to support the continued development of this tool: building such an interface is easy for anyone with GUI dev experience, but continuously improving it is what makes it truly usable.&lt;/p&gt;

&lt;h2&gt;Future Plans: From Manual to Seamless AI Integration&lt;/h2&gt;

&lt;p&gt;Right now, Threads is a powerhouse multi-pane text editor, but populating it requires manual copy-pasting from your LLM of choice. That's fine for bootstrapping, but we're gearing up for true integration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Endpoints First&lt;/strong&gt;: We'll start with the OpenAI API, allowing users to "continue" a thread by sending the full branch context plus a system prompt. Imagine right-clicking a node and selecting "Query GPT" - it appends the response as a child node, preserving history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Engineering Tools&lt;/strong&gt;: Built-in templating for system prompts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Model Support&lt;/strong&gt;: Expand to Grok, Gemini, and local models via Ollama. Devs can plug in custom endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version Control Friendly Exports&lt;/strong&gt;: Exporting to individual Markdown files is very useful - internally, many older conversations were archived this way, and Markdown looks nicer in Git diffs than JSON.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Assisted Structuring&lt;/strong&gt;: Use lightweight ML to suggest branches or summarize threads (like how Grok works).&lt;/li&gt;
&lt;/ul&gt;
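
&lt;p&gt;The planned "continue a thread" flow can be sketched as follows - a hedged Python illustration, where the &lt;code&gt;Node&lt;/code&gt; wiring and message shape are assumptions and the eventual API surface may differ: walk from the selected node up to the root, then send that branch as context behind a system prompt.&lt;/p&gt;

```python
# Hedged sketch: assembling full branch context for a "Query GPT" action.
# Node/parent wiring is hypothetical; only the root-to-node walk matters.
class Node:
    def __init__(self, content, parent=None):
        self.content = content
        self.parent = parent
        self.children = []

    def child(self, content):
        node = Node(content, parent=self)
        self.children.append(node)
        return node

def branch_messages(node, system_prompt):
    """Collect the root-to-node path as Chat API style messages."""
    path = []
    while node is not None:        # walk up to the root
        path.append(node.content)
        node = node.parent
    messages = [{"role": "system", "content": system_prompt}]
    messages += [{"role": "user", "content": c} for c in reversed(path)]
    return messages                 # the model's reply would become a child node
```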

&lt;p&gt;We're iterating fast, so file formats and exact usage patterns may change, and we'll invest more time in this if we see plenty of user interest. If you're a dev interested in sharing ideas or a user with feedback, hit us up at Methodox (&lt;a href="mailto:contact@methodox.io"&gt;contact@methodox.io&lt;/a&gt;). Stay tuned for more updates!&lt;/p&gt;

&lt;p&gt;Let's make AI conversations manageable for everyone.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://methodox.itch.io/threads" rel="noopener noreferrer"&gt;Product Page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://coderlegion.com/9505/methodox-threads" rel="noopener noreferrer"&gt;Release Announcement&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>conversationmanagement</category>
      <category>texteditor</category>
      <category>generativeai</category>
    </item>
  </channel>
</rss>
