Thoughts on software project sizes – or the pain of scaling.

Lines of Code aren’t a quality metric, but they are a cognitive one.

As a codebase grows, the limiting factor isn’t performance or tooling – it’s how much of the system a human can realistically hold in their head.

Past certain size thresholds, individual understanding gives way to structural knowledge, tooling reliance, and eventually team coordination, and this is where documentation, standards, and deliberate technical debt management stop being “nice to have” and become survival requirements.

Ignore this, and critical system knowledge quietly walks out the door one day – possibly for the last time.


While the mythical “LOC” is not an absolute measure in any way, it is still usable for roughly estimating the complexity and size of a project – and, by extension, what you may need, or be able to do, with the resources you have.
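
As a sanity check, a ballpark LOC figure is easy to produce yourself. Here is a minimal Python sketch; the file extensions and the “non-blank lines” rule are arbitrary choices of mine, and real tools like cloc or tokei count far more carefully (comments, generated code, and so on):

```python
from pathlib import Path

# Rough ballpark LOC count: non-blank lines across source files.
# The extension list is an arbitrary illustration, not a standard.
def rough_loc(root: str, exts: tuple = (".py", ".c", ".go", ".java")) -> int:
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            # Count only lines that contain something other than whitespace.
            total += sum(1 for line in path.read_text(errors="ignore").splitlines()
                         if line.strip())
    return total
```

Point it at a project root and you get a single number – crude, but enough to place a codebase in one of the size bands below.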

A scale echoed by most programmers as a rough measure is how many lines of code one can keep in their head effectively, and how much of the “big picture” you start to lose as the code size increases.

These are the rough guidelines I have arrived at over time. They reflect my personal perception, and while a lot of programmers tend to concur, each may have their own varying standards:

  • Tiny – 1,000 lines or less.
    • “Trivial” or a single-problem solution like an individual automation.
    • Easy to keep every tiny detail and every line in your head and know exactly where it is.

  • Small – 1,000-5,000 lines.
    • You know the details of individual modules and functions, and can locate a particular set of functionality, down to a rough line range, with ease.
    • You still have deep knowledge of the bulk of the code.
  • Medium – 5,000 to 20,000 lines.
    • You as an individual start losing the detail.
    • You start to focus on the overall structure of the entire project in your head,
      and know roughly what module a particular function exists in and how far in to scroll to find it.
    • You’re only maintaining detailed knowledge of the code you’re actively working on at this point.
  • Large – 20,000 to 50,000 lines.
    • Detailed knowledge is limited to your immediate work; for everything else you have a birds-eye sense of where things live in the code.
    • You’re working off high-level structural knowledge and will start using tools to pinpoint things you’ve forgotten the exact location for.
    • This tends to also be the mental threshold of where an experienced programmer can hold the picture of the entire codebase in their head.

  • Very Large – 50,000 to 250,000+ lines.
    • You have multiple team members at this point, and you’re treating the codebase as a set of interlocked projects.
    • Coordination for changes is absolutely essential at this point.
    • Only a handful of developers could maintain a complete mental picture of the codebase at this point, and they would be deep specialists who have worked on it for years.
    • If this is maintained by a smaller team – or worse, an individual – you need to ensure standards and proper handovers for when (not if) people move jobs. Otherwise, the maintaining experience and knowledge literally walks out of the door every night, and at some point, for the last time.

Please keep in mind that:

  • These thresholds describe individual cognitive limits, not team or system limits. 100k LOC in one tangled domain/monolith is not the same as 100k LOC across cleanly separated domains, such as miniservices (see the article about “when microservices go rogue”).

  • The numbers can be skewed upwards significantly by things like the following (in no specific order):

    • Good coding standards and principles.
    • Good tools and IDEs supporting the developers, such as the JetBrains and similar toolchains.
    • Good specifications. (This is where you start…)
    • Good documentation. (Sorry guys – no, code is NOT the documentation; it is the implementation – what you did.)
    • Documentation describes, in simple terms, how things hang together: the data formats used, connectivity, data and service relations, related services, and so on – and the intent of what you did, not just what you did.
    • Good processes supporting the development, coupled with development time frames that allow for the above.
    • Continuously caring about technical debt, setting aside time to “decruft” the bad stuff.

© EmberLabs / Chris Sprucefield (CC BY)

1PassMapper

Securing Your Development & Deployments with 1PassMapper

Source: https://github.com/emberlabstech/1PassMapper

At EmberLabs®, security has always been at the heart of how we design, build, and deploy software.
One recurring challenge for Dev/DevOps teams is balancing the security of credentials with the practical need for configuration during builds and deployments, while just going about their day-to-day work.

Far too often, secrets end up hardcoded in Git repositories and code, CI/CD pipelines, or configuration files – creating risks that can later become costly breaches.

This is where 1PassMapper comes in.


Why 1PassMapper?

Modern development teams rely heavily on automation.
Whether you’re deploying Docker containers, Kubernetes workloads, or traditional servers, there is always a need to inject API keys, database passwords, and certificates at runtime, at build time, or at deployment.

The problem:

  • You want to keep your configuration templates versioned in Git.

  • You do not want to commit sensitive credentials.

  • You want to maintain credentials and settings in a single location while using them in many places – a single update propagates everywhere.
  • When credentials (or other data) rotate, you need builds to automatically reflect those changes.

  • You may need different credentials for different environments without duplicating templates.

1PassMapper solves this by bridging your templates and secure vaults (like 1Password).


How It Works

1PassMapper allows you to:

  • Define your configuration files as templates (with tags like [[sql.host]]).

  • Store credentials in a JSON object either locally or inside a 1Password vault item.

  • Automatically map placeholders in your templates to the correct secret or configuration values during your build process.

This means your Git repository contains only clean templates with placeholders, while the real secrets live securely in 1Password.

Example:

Template (sample-template.json):

{
    "item1": "[[sql.host]]",
    "item2": "[[sql.user]]",
    "item3": "[[sql.pass]]",
    "item4": "[[host.domain]]",
    "item5": "[[cred.UNKNOWN]]"
}
Credentials (in 1Password or a local JSON file):
{
  "sql": {
    "host": "some.domain",
    "port": "3306",
    "user": "root",
    "pass": "someAwesomePassword"
  },
  "host": {
    "domain": "myCoolDomain.com",
    "port": "443",
    "certKey": "certificate.key",
    "cert": "certificate.pem",
    "certpass": "myKeyPassword"
  }
}

Practical example – build from 1Password CICD/MySecretItem:
1PassMapper -in sample-template.json -out config.json -vault CICD -item MySecretItem
or, using a local file:
1PassMapper -injson sampleJsonCreds.json -in sample-template.json -out config.json

Build output (config.json):

{
    "item1": "some.domain",
    "item2": "root",
    "item3": "someAwesomePassword",
    "item4": "myCoolDomain.com",
    "item5": "[[cred.UNKNOWN]]"
}
Your secrets never touch Git, and you can freely reuse the same template across environments.
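
The mapping step itself is conceptually simple. Here is a minimal Python sketch of the idea – not 1PassMapper’s actual implementation – replacing [[dotted.path]] tags with values looked up in a nested JSON-style object, and leaving unknown tags untouched, as item5 is in the output above:

```python
import re

# Sketch only: resolve "sql.host"-style dotted paths in a nested dict,
# and substitute [[tag]] placeholders, leaving unresolved tags verbatim
# so missing credentials are easy to spot in the output.
TAG = re.compile(r"\[\[([A-Za-z0-9_.]+)\]\]")

def resolve(creds: dict, dotted: str):
    node = creds
    for part in dotted.split("."):
        if not isinstance(node, dict) or part not in node:
            return None  # unknown key: caller keeps the tag as-is
        node = node[part]
    return node

def render(template_text: str, creds: dict) -> str:
    def sub(match):
        value = resolve(creds, match.group(1))
        return match.group(0) if value is None else str(value)
    return TAG.sub(sub, template_text)

creds = {"sql": {"host": "some.domain", "user": "root", "pass": "someAwesomePassword"},
         "host": {"domain": "myCoolDomain.com"}}
template = '{"item1": "[[sql.host]]", "item5": "[[cred.UNKNOWN]]"}'
print(render(template, creds))  # unknown tags survive verbatim
```

Because unresolved tags pass through unchanged, a leftover [[…]] in the rendered output is an immediate signal that a key is missing from the vault item.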

Security Benefits

  • Eliminates hard-coded secrets from Git, code in general, and possibly Docker images.

  • Centralizes credential storage in 1Password with audit trails and rotation policies.

  • Supports environment isolation (dev, staging, prod) with the same templates, using a Makefile or similar to determine which template is used.

  • Provides consistency across local builds and CI/CD pipelines by using the same keys for common items.
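
One way to wire environment isolation together is to derive the vault item name from the target environment in a wrapper script or Makefile. In this Python sketch, the per-environment naming convention ("MySecretItem-<env>") is purely an assumption for illustration; only the -in/-out/-vault/-item flags come from the tool’s usage:

```python
# Sketch: build the 1PassMapper invocation for a given environment.
# The item naming convention ("MySecretItem-<env>") is assumed here,
# not something 1PassMapper mandates.
def mapper_command(env: str) -> list:
    return ["1PassMapper",
            "-in", "sample-template.json",
            "-out", f"config-{env}.json",
            "-vault", "CICD",
            "-item", f"MySecretItem-{env}"]

print(" ".join(mapper_command("staging")))
```

The same template then serves every environment, with only the item name varying per build target.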


Development Benefits

  • Less hassle: new developers pull templates without worrying about leaking secrets.
    Just map a key to a secret – it’s reusable!

  • Deduplication: values can be reused via namespaces, leading to less duplication.
  • Flexibility: supports JSON, YAML, or any other text-based configuration format, including code.

  • Resilient pipelines: secrets update automatically when rotated in 1Password.

  • Portability: build in the cloud or locally with the same tooling.


Why EmberLabs® Built This

At EmberLabs®, we wanted a solution that was:

  • Lightweight and developer-friendly.

  • Flexible enough to handle multiple environments.

  • Strongly aligned with secure-by-design principles.

With 1PassMapper, we created a tool that is fast and simple and integrates seamlessly into existing DevOps workflows. The aim is to give teams confidence that their deployments are both secure and repeatable – with reduced configuration duplication as an added bonus.


Summing it up.

In 2025, development speed can’t come at the cost of security. Seriously.
With 1PassMapper, teams can have both: secure credential management and streamlined deployments.

If your organization struggles with keeping secrets safe while maintaining efficient builds, this approach may change how you think about DevSecOps practices.

🔒 Secure your pipelines.
⚡ Accelerate your workflows.
✅ Standardize your deployments.

© EmberLabs® (BY-SA)

Enjoy!