Posted on

When Microservices Go Rogue

Thoughts of the Fractional Chief

How “Nanoservices” Created 500+ Repos
… and why going too small will cost you…
TL;DR

Microservices can help teams move faster—until they explode into nanoservices: tiny, single-endpoint codebases scattered across hundreds of repositories with no boundaries, no cohesion, and no architectural sense. One service per path or method is not microservice architecture. It’s fragmentation.
The escape hatch is not returning to a monolith, nor is it doubling down on microservices. The answer is miniservices: a meaningful middle path, built on domain-sized, cohesive components that are small enough to understand but large enough to contain meaningful behavior, and that can be split in a reasonable way should the need arise. Miniservices restore control, reduce cognitive load, and prevent microservice sprawl.


1. The Accident Nobody Planned: Microservices Turning Into Nanoservices

This problem always begins with good intentions.

A team wants:

  • Autonomy
  • Scalability
  • Isolation
  • Independent deployment
  • Faster iteration

So they break the system into microservices.

But without boundaries, ownership, or domain thinking, teams quickly slide into:

“One endpoint = one service.”

Before long, the architecture contains:

  • 500+ Git repositories
  • Services with 50 lines of code
  • Duplicated logic (copy-paste, no shared libraries)
  • Inconsistent conventions
  • Circular dependencies via HTTP
  • Non-stop CI/CD pipelines
  • No unified view of the system
  • No stable contracts
  • Dozens of small deployment failures per week
  • Extreme Lambda or container costs when the pattern is taken to its logical end: one function or container per endpoint.

This is not distributed architecture – this is distributed confusion.


2. Why Nanoservices Form: The Root Causes

1. No shared definition of “microservice”

One team treats a microservice as a meaningful domain boundary.

Another treats it as “a new repo for every small piece of logic.”

2. Over-focus on independence

Teams think independence means “split everything” instead of “separate things that change for different reasons.”

3. Fear of merge conflicts or shared ownership

Avoiding coordination by dividing code endlessly is attractive—until everything depends on everything else.

4. CI/CD enthusiasm without restraint

“If it can be deployed separately, it should be deployed separately” leads to chaos.

5. No domain model

Without a clear domain understanding, boundaries follow endpoints—never business logic.

The result:

a fractal explosion of tiny services that nobody understands or controls.

This is just uncontrolled, unimpeded software design without architecture, and it leads to a truly unmaintainable mess. It will not just cost you down the line in maintenance; it is also expensive to run, because one Lambda or container per function creates massive infrastructure overhead, forcing you to oversize the infrastructure to absorb it.


3. The Cost: When Services Become Too Small to Be Useful

Nanoservices introduce friction and costs everywhere:

1. Cognitive overload

Developers must understand dozens of repos to make one change.

2. Slow delivery

Every feature requires touching N services, coordinating deployments, and updating contracts.

3. Performance issues

Network latency and cross-service chatter balloon.

4. Operational drag

Monitoring, alerting, dashboards, pipelines—multiplied by hundreds.

5. Versioning hell

Breaking changes ripple through the system like dominoes.

6. Hard-to-find bugs

Failures disappear into the cracks between services.

7. Ownership confusion

Who owns what? Nobody knows.
Or everybody does – which is worse.

8. Maintenance overhead

Such constructs are often full of copy-paste code, especially when combined with a lack of shared libraries, so you end up fixing the same bugs and security issues repeatedly.

In short: The “architecture” stops being distributed logic and becomes a massive distributed pain.


4. The False Choice: Monolith or Microservices

When nanoservice chaos becomes obvious, teams usually debate two extremes:

“Should we return to a monolith?”

Pros: simpler, cohesive, easier to understand.

Cons: hard to scale organizationally, risky to evolve if already unstable.

“Should we try to fix the microservices?”

Pros: autonomy, scalability, clear boundaries (in theory).

Cons: the current boundaries are wrong and require a rewrite anyway.

Both extremes are reactionary.

There is a middle ground.
Let’s fix this, calmly and sensibly. 


5. The Better Path: Miniservices

A miniservice is:

  • Organized around a real, primarily self-contained functional domain (think “users”, “payment”, “cart”, …)
  • Large enough to contain meaningful logic
  • Small enough to understand
  • Owned by a team
  • Independently deployable
  • Internally cohesive
  • Externally simple 
  • … and, if there are performance or scaling issues, relatively easy to split or rewrite compared to a monolith.

Think of them as:

“Microservices that actually make sense.”

Not one service per endpoint.

Not one giant ball of everything.

Instead:

  • One domain = one cohesive service (user, wallet, cart, …)
  • A handful of boundaries instead of hundreds
  • Shared common code and functions where that makes sense
  • Stable, well-defined contracts
  • Clear ownership
  • Fewer repos
  • Fewer deployments
  • Fewer failures
  • Easier onboarding
  • Easier refactoring

Miniservices operate as building blocks—not confetti.


6. What Miniservices Look Like in Practice

Each miniservice contains:

  • Its domain logic
  • Its data store (optional but common)
  • Internal modules, not external services
  • Coherent APIs
  • Shared patterns within the domain
  • Consistent error handling
  • Meaningful boundaries

Examples:

Instead of nanoservices like:

user-auth-post/
user-auth-get/
user-session-put/
user-session-delete/


in a microservice architecture, you typically have:

auth-service/
session-service/

 

and in a broader miniservice:

Instead of 50 small repos for billing, you have:

billing-service/

That includes:

  • invoicing
  • tax rules
  • charge retries
  • refunds
  • billing events

Not because it’s “big”—but because these things belong together, and it makes perfect sense to group them in terms of business logic.
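To make that concrete, here is a minimal TypeScript sketch of the internal-modules idea. Every name in it is a hypothetical illustration, not a prescribed layout:

// billing-service: one deployable unit, several internal modules.
// All names here are invented for illustration.

// Invoicing module – internal, not a separate service.
function createInvoice(orderId: string, amount: number): string {
    return `inv-${orderId}-${amount}`;
}

// Charge-retry module – also internal.
async function retryCharge(invoiceId: string): Promise<boolean> {
    // In nanoservice land this would be an HTTP call to another repo;
    // here it is a plain function call within the same codebase.
    console.log(`Retrying charge for ${invoiceId}`);
    return true;
}

// The API surface the outside world actually sees: one cohesive domain.
export async function handleChargeFailed(orderId: string): Promise<void> {
    const invoiceId = createInvoice(orderId, 100);
    await retryCharge(invoiceId);
}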


7. The Benefits of Miniservices Over Nanoservices

1. Drastically reduced operational overhead

Monitoring 20 services is hard.

Monitoring 200 is impossible.
Monitoring 500+ – forget it. 

2. Fewer deployments, fewer failures

Bigger services = fewer moving parts = fewer incidents.
When you build a miniservice, the moving parts are integrated directly in the code rather than consumed as external services. That means development and testing of integrations and functions work in a wholly different way, and you do not run the same risk of bugs being introduced by external changes in a foreign API endpoint.

3. Clearer domain ownership

Teams know exactly what they own and why, what lives in the module or service, and what its function is.

4. Easier onboarding

New developers learn domains, not random endpoints.

5. More stable contracts

Domains change less frequently than paths and endpoints.

6. Faster delivery

Fewer cross-service dependencies mean less coordination.

7. Real autonomy

Teams can make changes within their domain without touching external services every time.

Miniservices preserve the benefits of microservices without the pathological fragmentation, all while making each domain and its function clearer and its use more intuitive.


8. A Warning: “Miniservice” Doesn’t Mean “Mini Monolith”

Miniservices are not monoliths in disguise.

They still follow good distributed design:

  • Independent deployability (you redeploy the “user” service etc)
  • Clear domain boundaries (you don’t mix the user and payment services)
  • No shared mutable state
  • Well-defined and documented APIs – use formats like OpenAPI and tools like Swagger/ApiDog to design, generate and test. 
  • Decoupled schemas
  • Versioned contracts
  • No hidden cross-service calls. 

The difference is size and cohesion—not looseness.
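As a tiny TypeScript illustration of what “versioned contracts” can mean in practice (the shapes and names are made up):

// Hypothetical versioned contract for a user service.
// Consumers pin an explicit version instead of depending on whatever
// the endpoint happens to return today.
export interface UserV1 {
    id: string;
    name: string;
}

// V2 only adds an optional field, so existing V1 consumers keep working.
export interface UserV2 {
    id: string;
    name: string;
    locale?: string;
}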


9. How to Transition From Nanoservices to Miniservices

1. Identify domains, not endpoints

Look for natural groupings:

  • Billing
  • Authentication
  • Search
  • Cart
  • Profile
  • Notifications
  • Inventory
  • Recommendations

2. Collapse nanoservices into cohesive units

Merge them based on logic, not repo history.

3. Introduce clear domain ownership

One team per domain. No exceptions (well, maybe in small teams – there’s always an exception to the rule).

4. Reduce inter-service chatter

Replace many small calls with internal modules.
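A small before/after sketch in TypeScript, with entirely made-up names; the “before” half assumes a plain HTTP hop between two nanoservices:

// Before: a trivial lookup costs an HTTP round trip to another repo.
async function getUserEmailViaHttp(userId: string): Promise<string> {
    const res = await fetch(`http://user-email-get/users/${userId}/email`);
    return res.text();
}

// After: the same lookup is an internal module of one user service.
const emailsByUser = new Map<string, string>([["u1", "u1@example.com"]]);

function getUserEmail(userId: string): string {
    return emailsByUser.get(userId) ?? "unknown";
}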

5. Establish API contracts that reflect domain responsibilities

Stop mapping endpoints one-to-one.

6. Adopt internal libraries instead of isolated repos

Shared logic doesn’t require a new service.
You could use a shared repo that defines commonly used things such as database connectivity functions, config-file reading, logging, and other pieces that are commonly reused but rarely changed, as sketched below.
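A minimal sketch of what such a shared library could export, assuming Node-flavoured TypeScript and invented names:

// shared-lib: pulled in by every service as a versioned dependency,
// not deployed as a service of its own.
import { readFileSync } from "node:fs";

// Common, rarely changing plumbing lives here exactly once.
export function loadConfig(path: string): Record<string, unknown> {
    return JSON.parse(readFileSync(path, "utf8"));
}

export function log(level: "info" | "error", msg: string): void {
    console.log(`[${level}] ${msg}`);
}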

7. Avoid premature splitting in the future

Split only when two domains evolve independently – be clear about why the split is needed and why you do it.


10. Closing: The Problem Wasn’t Microservices — It Was the Lack of Domain Thinking

Microservices didn’t fail.

The teams never defined what a service was, and went to town with the notion that everything is a service, even when it is not.

Nanoservices are what happens when:

  • Endpoints become architectural boundaries
  • Repos become the default unit of structure
  • Autonomy is mistaken for fragmentation
  • Teams split before they understand the domain

The solution is not to swing back to monoliths.

The solution is to adopt sensible, stable, cohesive miniservices.

Miniservices give teams:

  • Autonomy without chaos
  • Flexibility without fragmentation
  • Scalability without sprawl
  • Clarity without over-simplifying
  • Boundaries without bottlenecks
  • Functional definition and confinement


So how do we define a “miniservice”? 

The “miniservice” is a sensible and practical middle-ground approach between the often-unscalable monolith and the pathological microservice hell (nanoservices). It is not a hard definition of size, but a set of logical and practical principles:

  • Logical Cohesion:
    It is a logical and practical grouping of components that belong together, based on a cohesive functional domain (e.g., Billing, Cart).
  • Understandable Scope:
    It isolates business logic into a size that can be easily understood, maintained, and worked on by a single small team.
  • Architectural Flexibility:
    It maintains the ability to be independently deployed and can be split reasonably if independent evolution is genuinely required later.
  • Optimized for Cost and Performance:
    It reduces the massive operational overhead and cross-service communication costs associated with nanoservices while retaining the ability to scale horizontally or vertically more easily than a monolith.

The architecture becomes something everyone can understand—and something the company can grow on, at lower cost.

 (C) (BY) EmberLabs® / Chris Sprucefield

 

Posted on

Infosec – Time for a New Class of “DevSec”?

Thoughts of the Fractional Chief
TL;DR
Most companies leave a gap between development and security. Developers move fast, and infosec steps in too late, when issues are already hard and expensive to fix.
A new role—DevSec—fills that middle space. DevSec catches insecure patterns early, filters noisy alerts, guides developers with simple and practical advice, and prevents small mistakes from becoming real vulnerabilities. It’s not a replacement for dev or infosec, but a missing function that keeps products safer, reduces rework, and helps teams move faster with fewer surprises.


2025-12 – By Chris Sprucefield.

Most companies still separate development and security into two distant groups. Developers build features, ship code, and keep things going. Infosec teams respond to alerts, run scans and write long lists of issues that often arrive too late in the development cycle to fix without disruption.

This split leaves a gap in the middle.

Meanwhile, nobody is watching the small decisions and habits that create security risks long before anyone notices them. By the time a formal security review happens, the code has settled and dependencies have grown. The system has become harder to change, and at that point, problems are expensive, frustrating, and often pushed aside because deadlines are tight.

We need a role that fills this gap.

For now, I’ll call it DevSec—not an existing title, but a new class of function designed to sit between development and traditional infosec, focused on preventing problems before they turn into incidents or audits.


What DevSec Is (and Isn’t)

  • DevSec is not a developer who happens to care about security.
  • DevSec is not an infosec analyst who steps in after the fact.
  • DevSec is not a pipeline engineer building scanners or automations.

Instead, DevSec is a practical, hands-on generalist who understands enough about code and coding in general, and enough about security, to evaluate risks as they appear and are reported by supporting tools or reviews, not months later. They don’t need deep, specialized expertise in every system, but they need the ability to look at a piece of code or an alert and decide:

  • Is this threat real, or is it noise?
  • Could this pattern cause trouble later if not fixed?
  • Does this issue affect our actual product or environment?
  • What is the simplest fix?
  • … and how do we prevent it from recurring?

The point is not to replace security teams or developers. The point is to augment and support devs at an early stage, and to prevent avoidable work and avoidable failures by catching issues early – when they are still easy to fix.


Why This Role Matters

1. Developers aren’t meant to be full-time security analysts

Most development teams already deal with tight timelines. Handing them a long list of scanner warnings only slows them down. They need someone who filters out the noise and highlights the few things that truly require attention.

2. Traditional security looks at problems too late

Security teams often depend on completed features, logs, or external scans. They step in only after code is written, patterns are set, and risky habits have already spread through the codebase. Furthermore, traditional infosec teams often do not have the budget for this, and they are typically ill-equipped to review code or be very hands-on, as their primary focus is on process, procedure, and higher-level systematic security.

3. The space in between is where most vulnerabilities are born

Unsafe defaults, repeated shortcuts, overly permissive functions, straight AI copy-paste issues, SQL injections, forgotten test logic, and many other common bad or lazy practices – these are the seeds of future incidents, and they form quietly in day-to-day development, especially when the pressure to deliver is high or the development team is young.

A DevSec function sees these before they harden into real vulnerabilities.


What DevSec Actually Does

Here’s what this role focuses on:

  • Reviewing code for insecure patterns without requiring full developer depth.
  • Triaging alerts from automated tools to identify what matters and what doesn’t.
  • Spotting bad practices early and nudging the teams to correct them.
  • Explaining risks in simple, practical and actionable terms.
  • Offering targeted suggestions for fixing problems now and avoiding them later.
  • Keeping the security posture aligned with how the product actually works.

This is early-stage, practical prevention—not bureaucracy, not policy writing, and not firefighting.


The Benefit to the Whole Team

With DevSec in place:

  • Developers get fewer false alarms and clearer guidance.
  • Security teams receive fewer late-stage surprises.
  • Risk is primarily handled at the point of creation instead of after release.
  • Bad habits are corrected early, reducing long-term maintenance pain.
  • The product becomes naturally more secure without slowing down delivery.
  • When there are external threats, developers get help determining where to focus their fixes.

This helps companies avoid the familiar cycle of security issues suddenly piling up right before an audit, or surfacing only after customers report something unexpected.

A nice side effect is that it is highly likely to save the company money through fewer costly late-stage fixes and revisits (time that can be spent on developing products), all while delivering a safer product, which in turn improves goodwill and market reputation among its customers.


Why DevSec Is Needed Now

Many companies are now building faster, integrating third-party tools constantly, and relying heavily on automated systems. The pace of change means small missteps compound quickly. Traditional security functions can’t keep up with that pace if they’re only brought in late. Developers can’t shoulder responsibility for everything either—they’re not equipped, and it’s not realistic.

Classic infosec teams’ primary focus is on the bigger picture, processes, and procedures; while very good at what they do, they are typically not very hands-on.

The midpoint has been empty for too long.

A dedicated DevSec role fills that gap and brings steady, ongoing security awareness into the daily rhythm of development, without overwhelming anyone.

This isn’t about introducing another layer of process. It’s about putting someone in the spot where issues actually appear—right where code is written, habits form, and risks begin.

DevSec is the missing piece that makes that possible.

 

(C) (BY) EmberLabs® & Chris Sprucefield

Posted on

A Dev Support Tool for GitHub

Every now and then, you may wish you had a simple script to check out or
update all the GitHub repos you have, at once, using a single command.

This can be for backups, for checking out new repos you don’t have yet,
or simply for keeping an up-to-date local copy of everything.

Well, I wrote a similar little script some time back for BitBucket.
(http://emberlabs.tech/2023/05/08/easily-mirror-a-bitbucket-repo/)

This should work just fine on pretty much any Linux or Mac install (or similar with a bash). 

Nuf of the talk! Show me the code!! =D

#!/bin/bash
# GH Mass checkout. 
# V 1.0 - 2025-11-23 Spruce
# Prerequisites: curl and jq
# Place your GH PAT in ~/.gitpat (contents / read-only needed)
# Relies on SSH for cloning/pull, add your public key to GH. 

ORG="[Your ORG name here]"
PAT="[Your GH PAT]"
CDIR="$(pwd)"

if [ "${PAT}" == "" ] 
then
    if [ -e ~/.gitpat ] 
    then 
        PAT="$(grep github ~/.gitpat | head -1)"
    else
        echo "No PAT available."
		echo "Create a PAT (Profile -> Settings -> Developer settings -> PAT"
		echo "Select fine-grained,  account/org, and all repos, then select contents with read-only."
        exit 1
    fi
fi

function GetRepoLists {
    echo ""
    echo -n "Updating the repolists : "  
    rm -f repolist.*
    for page in $(seq 1 20) ; do
        echo -n "${page} "
        curl -s \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            -H "Authorization: Bearer ${PAT}" \
            "https://api.github.com/orgs/${ORG}/repos?per_page=100&page=${page}" \
            | jq -r '.[].ssh_url' > repolist.${page}
        if [ ! -s "repolist.${page}" ]
        then
            rm -f "repolist.${page}"
            break
        fi
    done
    echo ""

    cat repolist.* > repolist
    rm -f repolist.*
}


function ClonePull {
    mkdir -p "${ORG}"

    while IFS= read -r URL; do
        repo="${URL##*/}"
        repobase="${repo%.git}"
        if [ -e "${ORG}/${repobase}" ]
        then
            echo "Updating ${ORG}/${repobase}"
            cd "${ORG}/${repobase}"
            git pull
            cd "${CDIR}"
        else
            echo "Cloning  ${URL}"
            cd "${ORG}"
            git clone "${URL}"
            # Return to the start dir so the next iteration's relative paths work.
            cd "${CDIR}"
        fi
    done < "./repolist"
}

function GitExec {
    GITCMD="$1"
    cd "${ORG}"
    for repo in *
    do
        echo "Executing ${GITCMD} in ${ORG}/${repo}"
        cd "${CDIR}/${ORG}/${repo}"
        git ${GITCMD}
        cd "${CDIR}"
    done
}

# --- Actions -------------------------------------------------------

case "$1" in
    full)
        GetRepoLists
        ClonePull
        ;;
    pull)
        GitExec "pull"
        ;;
    fetch)
        GitExec "fetch"
        ;;
    *)
        echo "Usage:"
        echo " "
	echo "       full      Update existing and clone new repos."
        echo "       fetch     Execute a fetch on all existing repos."
	echo "       pull      Execute a pull on all existing repos."
    ;; 
esac
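To use it: drop the script into the directory you want the repos under (the filename below is just an example), make it executable, and pick an action:

chmod +x gh-mass.sh
./gh-mass.sh full

The first run clones everything into a folder named after your org; subsequent runs update what is already there.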

Njoy!!

Posted on

Some Language oddities

Let’s start with some of the classics:

Our ever “friend” JavaScript
console.log("b" + "a" + +"a" + "a") ;
“baNaNa”? Really? (The unary + converts the “a” next to it into NaN.)

console.log( [] !== [] ) ;
.. evaluates to “true”? Really?? JS?
An empty array is absolutely, truly different, in values, shape, and form, from… an empty array? (Each [] creates a brand-new object, and !== compares references, not contents.)

Then, Typescript?
function (a: any, b: any): any

Typescript: let’s enforce types on data and vars.
Dev: Nope!
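To be fair, the fix costs nothing; a hypothetical example of what the annotations are supposed to look like:

function add(a: number, b: number): number { return a + b; }

Annotate everything as any, and you have opted out of the one thing TypeScript was hired to do.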

 

 

Posted on

1PassMapper – updated.

Not too long ago, we published this article about 1PassMapper.
( article: http://emberlabs.tech/2025/09/18/1passmapper )

Guess what?
It just got updated with a couple of new items to make it even more powerful!

We just added the flags -prefix <path> and -token <filename> .

The -prefix flag makes the build prepend a prefix to the source paths in the template, so a single template can serve all your environments instead of one file per environment; the path prefix moves into an argument.

In the template, you would use something like [[some.path.to.cred]], and with -prefix dev, the real path becomes [[dev.some.path.to.cred]].

Replace -prefix dev with -prefix prod, and the “prod” source path in your credentials file is used instead.

And -token <filename>? This is how you can easily switch from the default 1Password token file to another token file, allowing you to use multiple 1Password accounts for different needs.
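Putting it together, a build line might look like this (the vault, item, and token-file names are made-up examples; -in/-out/-vault/-item are the flags covered in the original article):

1PassMapper -in sample-template.json -out config.json -vault CICD -item MySecretItem -prefix prod -token ~/.op-token-alt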

Get your team on board with keeping the creds out of Git!

Get your copy today – It’s free!
https://github.com/emberlabstech/1PassMapper

Njoy!!

 

Posted on

1PassMapper

Securing Your Development & Deployments with 1PassMapper

Source: https://github.com/emberlabstech/1PassMapper

At EmberLabs®, security has always been at the heart of how we design, build, and deploy software.
One recurring challenge for dev/DevOps teams is balancing the security of credentials against the practical need for configuration during builds and deployments, while just going about their day-to-day work.

Far too often, secrets end up hardcoded in Git repositories and code, CI/CD pipelines, or configuration files – creating risks that can later become costly breaches.

This is where 1PassMapper comes in.


Why 1PassMapper?

Modern development teams rely heavily on automation.
Whether you’re deploying Docker containers, Kubernetes workloads, or traditional servers, there is always a need to inject API keys, database passwords, and certificates at runtime or at build/deployment time.

The problem:

  • You want to keep your configuration templates versioned in Git.

  • You do not want to commit sensitive credentials.

  • You want to maintain credentials and settings in a single location, but used in many locations – Single update.
  • When credentials (or other data) rotate, you need builds to automatically reflect those changes.

  • You may need different credentials for different environments without duplicating templates.

1PassMapper solves this by bridging your templates and secure vaults (like 1Password).


How It Works

1PassMapper allows you to:

  • Define your configuration files as templates (with tags like [[sql.host]]).

  • Store credentials in a JSON object either locally or inside a 1Password vault item.

  • Automatically map placeholders in your templates to the correct secret or configuration values during your build process.

This means your Git repository contains only clean templates with placeholders, while the real secrets live securely in 1Password.

Example:

Template (sample-template.json):

{
    "item1": "[[sql.host]]",
    "item2": "[[sql.user]]",
    "item3": "[[sql.pass]]",
    "item4": "[[host.domain]]",
    "item5": "[[cred.UNKNOWN]]"
}
Credentials (in 1Password or a local JSON file):
{
  "sql": {
    "host": "some.domain",
    "port": "3306",
    "user": "root",
    "pass": "someAwesomePassword"
  },
  "host": {
    "domain": "myCoolDomain.com",
    "port": "443",
    "certKey": "certificate.key",
    "cert": "certificate.pem",
    "certpass": "myKeyPassword"
  }
}
Practical example – build from the 1Password item CICD/MySecretItem:
1PassMapper -in sample-template.json -out config.json -vault CICD -item MySecretItem
or, using a local file:
1PassMapper -injson sampleJsonCreds.json -in sample-template.json -out config.json

Build output (config.json):

{
    "item1": "some.domain",
    "item2": "root",
    "item3": "someAwesomePassword",
    "item4": "myCoolDomain.com",
    "item5": "[[cred.UNKNOWN]]"
}
Your secrets never touch Git, and you can freely reuse the same template across environments.

Security Benefits

  • Eliminates hard-coded secrets from Git, code in general, and possibly Docker images.

  • Centralizes credential storage in 1Password with audit trails and rotation policies.

  • Supports environment isolation (dev, staging, prod) with the same templates, using a Makefile or similar to determine which template is used.

  • Provides consistency across local builds and CI/CD pipelines, by using the same key for common items. 


Development Benefits

  • Less hassle: new developers pull templates without worrying about leaking secrets.
    Just map a key to a secret – it’s reusable!

  • Deduplication: values are provided via namespaces, leading to less duplication.
  • Flexibility: supports JSON, YAML, or any other text-based configuration format, including code.

  • Resilient pipelines: secrets update automatically when rotated in 1Password.

  • Portability: build in the cloud or locally with the same tooling.


Why EmberLabs® Built This

At EmberLabs®, we wanted a solution that was:

  • Lightweight and developer-friendly.

  • Flexible enough to handle multiple environments.

  • Strongly aligned with secure-by-design principles.

With 1PassMapper, we created a tool that is fast and simple and integrates seamlessly into existing DevOps workflows. The aim is to give teams confidence that their deployments are both secure and repeatable, with reduced configuration duplication as an added bonus.


Summing it up.

In 2025, development speed can’t come at the cost of security. Seriously.
With 1PassMapper, teams can have both: secure credential management and streamlined deployments.

If your organization struggles with keeping secrets safe while maintaining efficient builds, this approach may change how you think about DevSecOps practices.

🔒 Secure your pipelines.
⚡ Accelerate your workflows.
✅ Standardize your deployments.

© EmberLabs® (BY-SA)

Enjoy!