TL;DR:
Register a trading name (and TM), use it as your employer on your CV, describe your roles, and treat clients as projects, not jobs.
This provides a seamless, professional timeline—no gaps, and full legitimacy.
The longer reasoned version…
If you are looking for a new position, gaps in your CV can raise questions, especially if you work as a contractor or are self-employed.
Here’s a practical approach that can solve these issues and provide a solid timeline.
Register a Trading Name:
Even as a sole trader, register a trading name for your business. In most countries this is inexpensive, and the name can also be registered as a trademark (™), giving your business added credibility. Just choose the trading name you want to work with; typically, within the EU, a national registration is around €100.
This also gives you VERY strong legal protection for your trading name, and you can prevent others from using it within your class.
It is even stronger protection than registering a business name with Companies House…
Also, register it as a “wordmark” for maximum protection. A regular trademark is linked to, and limited to, the specific graphical representation that has been registered, whereas a wordmark protects the “word” itself, not how it is represented, so you can use the name in ANY graphical, logo, or font setting.
Trade Under Your Registered Name:
Conduct all your business activity using your trading name.
Use this name consistently on all invoices, contracts, and professional correspondence.
Present Your Trading Name as Your Employer:
On your CV, list your trading name (and TM, if registered) as your employer.
Your self-employment under your own company or trading name is a perfectly valid and legal form of employment, and way of representing yourself on the market.
List Roles, Not Clients:
Instead of listing each client as a separate employer, describe your roles and responsibilities under your trading name, and optionally mention key clients or projects as examples.
This creates a single, continuous timeline of employment and avoids unexplained gaps.
Example:
Period | Employer | Role | Details |
---|---|---|---|
2015–Present | ACME Consulting™ | Owner/Consultant | Provided IT consulting to various clients… [list of clients, and summary types of work / roles for the clients] |
The benefits?
Ensures a consistent work history with no unexplained gaps.
Enhances professional image with a registered TM.
Allows you to maintain confidentiality about clients if needed.
Factually correct: Self-employment under a trading name is a legal form of employment.
Common practice: Many experienced contractors use this method to avoid the appearance of gaps.
Protects client confidentiality and avoids cluttering the CV with short stints.
Removes bias – Employers often hesitate if they see gaps or too many short contracts; this format presents stable, ongoing work.
Possible caveats?
Some HR departments may request clarification about what you did during that period, so be ready with a project/client list if asked.
Any non-client periods can be explained as self-investment in training, working on internal products, and so on.
In some industries (e.g., government, finance), disclosure of clients may still be required at later stages.
This is technically not a problem, as you would have the invoicing and other records to show, as well as self-investment to cover any gaps.
Good luck!
Consultants/Contractors vs Employee – A side by side comparison
Which one? Or both?
A seemingly never-ending and long-standing dilemma for many decision-makers and businesses is whether to use employees or consultants/contractors (hereafter, contractors).
When choosing between contractors and employees, it’s a common perception that contractors are for short-term commitments and employees for long-term ones, and that contractors are very expensive, but neither generally holds true today.
In either case, one needs to consider real factors like flexibility, cost-effectiveness, and specialized expertise. Contractors offer targeted skills, ideal for equally short-term specific projects without long-term commitments, or for general long-term commitments and continuity if properly managed by the provider, while employees have the ability to bring continuity and deeper integration into your organization’s culture. It all comes down to what your priorities and goals are.
Shifting preferences.
Also, in today’s world, the sentiment of many highly skilled professionals has shifted a lot in recent years: they are becoming contractors rather than employees, and contracting has become a common new form of employment. It offers greater flexibility from the business side of things, and a contractor can equally become a longer-term part of the business with an equal level of commitment, if done right.
The longevity-of-commitment claim is especially at stake here: it is now common for employees to change jobs every two to four years, sometimes more often, negating the long-term engagement argument, and price-wise there is no longer much of a realistic difference between the two.
The net result of this speaks in favor of the contractor, and not purely from the business perspective.
Side by side:
Take a look at these two quick side-by-side summaries, at comparable levels:
Contractor/Consultant | Permanent Employee |
---|---|
€550/day over 44 weeks. No additional costs of: benefits (their selling points) | €75K salary over 44 weeks. Additional costs to consider; other common costs (benefits) include (estimates per year): office space, average cost across the EU per year and employee: €7,500 (range: €3.9–15k/y) |
Total: €550/day for 44 weeks, €121K | Total Year 1: €121–148K (€134.5k); Total Year 2+: €108–135k (€121.5k) |
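The contractor column is easy to sanity-check; a minimal sketch using the figures from the table above (the 5-day week is an assumption):

```shell
#!/usr/bin/env bash
# Back-of-the-envelope check of the contractor total in the table above.
day_rate=550      # EUR per day, from the comparison
weeks=44          # billed weeks per year
days_per_week=5   # assumed full-time engagement

total=$(( day_rate * days_per_week * weeks ))
echo "Contractor, per year: EUR ${total}"   # prints: Contractor, per year: EUR 121000
```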
So far in this comparison, it is pretty much like-for-like cost-wise, with added benefits for both parties, but it does not stop there.
The contractor gains greater self-governance, albeit at somewhat greater risk and heavier contractual commitments; for the business, it is a more well-defined commitment with a known entity and a smaller set of risks in cases of non-performance and similar issues.
The hidden costs:
Additionally, there are likely considerable company overheads in HR, legal, and compliance: maintaining employee records, managing disputes, conducting reviews, providing training, and many other functions. These are commonly not required for contractors, due to their contractual self-governance.
It doesn’t stop there. For staff, the business usually carries other overheads not covered above, such as parking, office space, heating, energy, and office supplies, which need to be added to the cost of the employee, whereas for the contractor these are mainly or wholly covered at their own expense. This cost is summarized above as a range, based on 15 sqm per employee and year, as an average across the EU for both co-working and outright rented spaces.
As for longevity and company culture, the relatively small cost of including contractors in company events, parties, etc., will be greatly outweighed by the benefits, and it remains tax-deductible, as it is now supplier entertainment. One just needs to be careful about anti-bribery regulations.
As you can see, after the first year contractors are on par with, or cheaper than, employees, without any loss of productivity or protection for your business. In the end, all things considered, it is a win-win for both parties, business and contractor, offering the greater flexibility.
Summary:
If you take all of the hidden business overheads as listed above into account, you will likely soon see that the contractor is actually the cheaper option overall, with the same or greater business benefits.
The primary question now comes down to:
If the answer is yes, then, you have just widened your recruitment basis and access to qualified staff.
… and keeping them out of the code in GIT?
Let’s say you have a larger config file with a pile of items that you want to fill in while deploying, but don’t want to keep in git, such as settings or credentials.
At the same time, you want to test things while developing, without having to set up credentials every time or copy files in and out, and you want to quickly configure things for deploying/testing against different targets.
This obviously does not take away from the use of live secret management, such as AWS Secrets Manager and others, but is suitable for more “static” solutions, or fundamental configurations required for base setups.
For the local shell, add the following to your .bashrc or similar:
alias 1passlogin="eval \$(op signin)"
Prerequisites:
Steps:
{
  "cred": [
    { "key_1": "value 1" },
    { "key_2": "value 2" }
  ]
}

(… and so on, with one { "key_n": "value n" } object per credential.)
#!/usr/bin/env bash

# Check that the required arguments are provided
if [ "$#" -ne 2 ]; then
    echo "Usage: $0 <1pass vault/item> <template_file>"
    exit 1
fi

path="$1"
template_file="$2"

# Validate that the template file exists before fetching anything
if [ ! -f "$template_file" ]; then
    echo "Error: Template file '$template_file' does not exist."
    exit 1
fi

# Fetch the credentials JSON from 1Password
op read "op://${path}/json" > 1p-credfile.json

# Validate that the credentials file was written
if [ ! -f "1p-credfile.json" ]; then
    echo "Error: JSON file '1p-credfile.json' does not exist."
    exit 1
fi

# Read the JSON content and template content from the files
json="$(cat "1p-credfile.json")"
template="$(cat "$template_file")"

# Use jq to traverse the .cred array and replace each [[key]] placeholder
result=$(echo "$json" | jq -r --arg template "$template" '
  reduce .cred[] as $item ($template;
    reduce ($item | to_entries[]) as $kv (.;
      gsub("\\[\\[" + $kv.key + "\\]\\]"; $kv.value))
  )
')

# Output the replaced string
echo "$result"

# Clean up after ourselves (rm -f is safe even if the file is gone)
rm -f 1p-credfile.json
{
"configValue1": "[[key_1]]",
"configValue2": "[[key_2]]"
}
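To see the substitution step in isolation, here is a self-contained run of the same jq reduce used in the script, with the example credentials inlined instead of fetched from 1Password:

```shell
#!/usr/bin/env bash
# Stand-alone demo of the [[placeholder]] substitution; no 1Password needed.
json='{"cred":[{"key_1":"value 1"},{"key_2":"value 2"}]}'
template='{"configValue1":"[[key_1]]","configValue2":"[[key_2]]"}'

# For every key/value pair under .cred, replace any [[key]] occurrence
# in the template string with its value.
echo "$json" | jq -r --arg template "$template" '
  reduce .cred[] as $item ($template;
    reduce ($item | to_entries[]) as $kv (.;
      gsub("\\[\\[" + $kv.key + "\\]\\]"; $kv.value))
  )'
# prints: {"configValue1":"value 1","configValue2":"value 2"}
```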
Putting it all to work…
In your Makefile or similar, use the script as:
./MapCreds-onepass.sh "<vault>/<item>" "<source_file>" > "<target_file>"
Example:
./MapCreds-onepass.sh CICD/app-config config-template.json > config.json
and the output, config.json, would become:
{
"configValue1": "value 1",
"configValue2": "value 2"
}
… just don’t forget to mark the resulting “config.json” as an excluded file in git!
(and obviously, this works on other text files as well, including source code, for replacing values/settings that have their source in 1Password.)
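Assuming the output file name from the example above, the git exclusion is a one-liner:

```shell
# Append the generated file to .gitignore so it is never committed.
echo "config.json" >> .gitignore
```

(The template file, containing only the [[placeholders]], stays in git as usual.)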
Njoy!
Legacy applications…
… the regular pain in everyone’s back.
Do you have a legacy application that needs updating, or even a rewrite?
Failed application rewrites usually involve a combination of factors. The common culprits below are the primary reasons for failure; it is not the language the application is written in, not the conversion from one language to another, and not one language being unable to do the work of the other.
Any Turing-complete language could literally do the job, even such a hellish language as brainf*ck…
This shows that the language in itself is not the problem; more often than not it comes down to the question: what is the right tool for the job?
I prefer working with Go, as it is a modern language that works equally well on almost any platform, and it is fast to develop and get working results in.
It also has the benefit of being close enough to many other languages that their devs can understand it without any issues.
Also, Go typically does not suffer from the inherent “legacy library hell” that many other languages, like Python, Java, or C++, do, as it takes a modern approach, with efficient mechanisms for keeping dependencies up to date.
So what ARE the big issues then?
Let’s start with a classic “problem child” of yesteryear, COBOL, symbolizing the more “mature” end of the language landscape, and use it as an example; the very same core issues apply to pretty much any language.
It really doesn’t matter what the language is: the underlying problems are commonly the same for all legacy applications.
Let’s look at a few key points.
1. Lack of Documentation and Knowledge
2. Underestimating System Complexity
3. Scope Creep and Poor Requirements Gathering
4. Mismatch Between New and Existing Systems
5. Cultural and Organizational Resistance
6. Testing Challenges
7. Skill Gaps
8. Cost and Time Overruns
9. Failure to Preserve Legacy Business Logic
Key Point: Lack Of Documentation Amplifies All Other Problems
When documentation is lacking, every other issue is compounded:
Solution Approaches?
Why Modern Languages Like Go?
Conclusion
Rewriting an application is a significant undertaking, but with careful preparation, stakeholder alignment, and the use of modern tools and languages, it can transform outdated systems into robust, efficient platforms. By addressing risks head-on and employing best practices, organizations can successfully modernize their applications while minimizing disruption and maximizing value.
Do you have legacy applications that need to be reworked, modernized, or documented?
.. all while using modern tools, technologies, and keeping future maintainability and support in mind?
Let’s talk.
Thoughts on AI, Security and practical day to day use.
As mentioned before, I am involved in R&D on a similar branch with “AI Hardware“, and this brings me to AI and its more general use.
These days there is almost a competition going on about using AI wherever possible, regardless of whether it’s needed, practically usable, or actually serves a purpose. It’s quite understandable, because it’s quite hard to sell a product and be competitive these days without the word AI crammed in somewhere in the sales pitch.
So let’s have a little bit of a pragmatic look at it.
So what is an AI?
Most AIs today are LLMs (Large Language Models), based on software that emulates neurons and uses large masses of data as training material (the Internet).
There are basically three ways the models are trained, but most importantly, no AIs are trained on the fly, because as things stand, that would effectively destroy the neural network setup while in flight.
We are simply not there with dynamic AI LLMs, just yet…
You only retrain the models on the existing data, plus any additional data gathered during sessions, as the training is very taxing on computational and financial resources.
This in turn means that live leakage carries a very low risk, but the risk of future leakage is still there, due to the incorporation of training material that may be gathered from questions and other supplied data.
As always, there are of course variations to the above, but it gives you a rough insight as to what it is and how it works.
A little bit of history from a developer’s perspective.
In the past, when there were only books and manuals, developers had to rely on these, often knowing them by heart to actually use them. The amount of information was quite limited and it was fairly easy. Essentially everything was written from scratch.
As we know, history happened: the Internet came to be, and with it things like Google. Open-source solutions exploded, followed by the help sites that went with them and anything else development-related.
Sites like Stack Exchange and many others appeared, and code samples were shared between users. Because of the perceived security risks, many developers were banned by their companies from using the Internet to search for solutions, even for common problems, because of the “risks” involved: you could get “hacked” by a copy/paste, or you could leak information about your IP or other precious assets.
This even applied to generic information, such as an error code and its cause in a specific product. Eventually, the Internet was generally accepted and became part of everyday business life.
The primary risk of this was, and is, as always, the user who uncritically makes a full copy/paste: after providing enough specific information for a possible attacker to write a malicious piece of response that would work in that specific environment and solution, the user then, without any consideration or review, and with no peer review on commits, implements it in the production code.
The exact same can be said of any AI, whether in-app (typically in a developer IDE) or external, like ChatGPT’s website. Many infosec teams seem to ignore the fact that you can look up search terms on Google and see what is being searched, providing the exact same purported leakage mechanism in near, if not real, time, whereas this typically does not apply to AIs. Never mind that a question on a site like Stack Exchange will remain in full text forever, while the AI question is ephemeral and will not be reused as a verbatim, ready-made answer, even though it may become part of the training material at some point down the line, as numerical weights, not actual full text.
In short: an AI will not be able to recall or reproduce a specific question from another user, because that is simply not how AIs and LLMs work. The sessions work in isolation, but the data may later be used for training, as numerical “weights” for a specific item, never as plaintext data.
Why the AI instead of “Google”?
Where Google and the others are mainly about static content, an AI is highly dynamic and can actually understand what you want, quickly narrowing the answer down to what you need, without all the “fluff” of wading through endless amounts of text and sites to find what you were looking for. It can do this by incorporating third-party sources or doing searches on your behalf to gather the information. This is exactly what makes AI so useful, speed and output limited to what you actually asked for, and it is why AI is quickly becoming the “Google” replacement.
An example:
Compared to Google et al.: if you’re stuck on a problem, you can describe the type of problem you have and get a reasoned argument, with explanations specific to the problem, on how to solve it, without having to assemble and parse the information yourself, something that can be very tedious.
AI:
(This passes my sanity inspection as a proposal for a solution…)
…. versus Google:
What about Information Security then?
A couple of ground rules when it comes to dealing with AI’s:
Again, always be the second opinion. Never just copy/paste: actually look at what was presented and make your own informed decision, does this make sense? Never assume the answer is 100 percent correct.
This is what any responsible developer would do, and if there was malicious intent, it would be far easier to do it right there themselves than to go to the AI to get it done, as the developer would not need to explain to the AI what the environment looks like and how the specific exploit should be implemented. That is information they already have.
The goal of the infosec team here should be to embrace it rather than just ban the users from using it, and to educate the staff about how to use it safely!
Prescribe pragmatic, safer ways of interacting with AIs, because in the end they are incredibly useful tools that will not go away, just like the Internet didn’t go away and eventually had to be accepted, despite the security teams’ kicking and screaming.
Trust me, it will be used no matter what anyone says, because it is simply too useful not to use, and the likelihood of this is even higher in time- and resource-pressured teams, where a lot of tedious work can be simplified and done very quickly compared to the alternatives. It is far better to have a mutual understanding of good practices, do’s and don’ts, than a skunkworks division.
Additionally, keep in mind that the absolute majority of security tools today use AI, be that code monitoring and validation, antivirus, API scanners, and many others, inspecting code, classified document files, etc., regardless of security markings, on pretty much any hardware the company owns or maintains. It’s already there, and if there were a leak you would likely not know about it until way too late (looking at you specifically, MS Copilot), and such an event would be a much bigger threat than the occasional use of AI for a specific purpose by properly trained staff.
All these modern security tools are entirely based on AI or AI input/processing, and all suffer the same issue of possible data leakage, one way or another.
Let’s be very clear about something here:
Any tool that claims it will not be using the customer data is simply marketing hype and lies, because if they did not, they would soon find themselves out of business, unable to keep up with the evolution of code and security threats compared to their competitors. All the talk about “secure models”, etc., is marketing fluff. Where do you think their current training material actually comes from?
Hint: They didn’t invent it…
If you deploy any of these AI security tools for wholesale scanning of the company IP, it makes absolutely no sense to simultaneously and unconditionally ban the use of AIs for developers or other creative staff. As mentioned before, staff training on proper use is the absolute key here, and a kneejerk ban, because you are afraid of possible unknowns, is absolutely NOT the answer: all you will achieve is an unsafe skunkworks project. Like it or not, it’s reality…
Takehomes for the security team:
Deal with it!
The only reasonable thing you can do at this point is to accept “defeat”, just as you eventually had to with the emergence of the Internet: train your staff in reasonable use, protecting company IP and personal data, and make sure security is covered by providing working guidelines of do’s and don’ts, allowing an agreed, controlled use rather than the chaotic underground skunkworks model that will otherwise emerge regardless of what you say, and over which you will have absolutely no control.
Never mind the fact that you would effectively “outlaw” most, if not all, modern developer IDEs, which are commonly based on AI support, in part or in full, using their code models, relegating developers back to Notepad or similar “development” tools.
Trying to ban the use of AI will be about as effective as the 1920s Prohibition was… (NOT!)
Then what?
You should consider specific services (and I am not plugging anyone here) like ChatGPT’s enterprise model, where you can actually get the benefits and control security/privacy, preventing leakage and reuse for training.
If you can save an hour a day per developer, increasing their productivity, this will be an easy expenditure to justify, and you gain control over what is done, how it’s done, who does it, and on what basis. It’s a dual win-win that will gain acceptance.
If you can’t beat them, be pragmatic and join them, making sure it’s done responsibly…