
Application rewrites?

Legacy applications…

… the perennial pain in everyone’s neck.

Do you have a legacy application that needs updating or even a rewrite?

When application rewrites fail, it is usually due to a combination of factors. The common culprits listed below are the primary reasons for failure – not the language the application is written in, not the conversion from one language to another, and not one language being unable to do the work of the other.

Any Turing-complete language could literally do the job, even such a hellish language as Brainf*ck.

This shows that the language in itself is not the problem; more often than not it comes down to a simple question – what is the right tool for the job?

I prefer working with Go, as it is a modern language that works equally well on almost any platform, and it is fast to develop and get working results in.
It also has the benefit of being close enough to many other languages that developers of those languages can read it without any issues.

Go also largely avoids the “legacy library hell” that many other languages like Python, Java or C++ suffer from, as it has a modern take on dependency management, with efficient mechanisms for keeping things up to date.
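For context, the mechanism in question is Go modules. A minimal, hypothetical go.mod pins every dependency to an explicit version (the module path and dependency below are just examples):

module example.com/legacy-rewrite

go 1.22

require github.com/google/uuid v1.6.0

Keeping it current then boils down to running go get -u ./... followed by go mod tidy.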

So what ARE the big issues then?

Let’s start with a classic “problem child” of yesteryear – COBOL – as a stand-in for the landscape of more “mature” languages. The very same core issues apply to pretty much any language.

It really doesn’t matter what the language is; the underlying problems are commonly the same for all legacy applications.
Let’s look at a few key points.

1. Lack of Documentation and Knowledge

  • Legacy systems like COBOL often lack detailed, up-to-date documentation. Over time, original developers leave, and institutional knowledge is lost.
    This often stems from the claim that the code is the documentation in itself. This has never been true, nor will it ever be.
    The code is merely the implementation of the specification – a record of what you did – and never the documentation itself, specifically because the code never describes the original intent of what you set out to do. I will accept documentation living in the code on one condition: the comment preceding the code block explains your intent, what you plan to achieve, what it returns, and so on, written before you actually write the code. This helps maintainability, as you can now check the code against this comment to see whether it actually does what you said it would do.
  • Business rules, logic, and workflows are often “baked into the code” without external references, making it hard to replicate functionality correctly.
  • Another common example in many legacy applications is the use of “magic values”: poorly documented or undocumented numbers with specific meanings, used throughout the code. Changing such a value can have catastrophic effects, especially where you do not expect it (see the sketch just below).
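To make the magic-value and intent-comment points concrete, here is a minimal Go sketch; the package, constant name, value, and policy reference are all hypothetical:

package rules // hypothetical package

// Intent: orders above this net amount (in cents) must be routed to
// manual fraud review. The threshold comes from the (made-up) 2019 risk
// policy; change it only together with that policy.
const manualReviewThresholdCents = 250_000

// RequiresManualReview reports whether an order amount needs human review.
func RequiresManualReview(amountCents int64) bool {
    // The legacy style would be a bare: if amountCents > 250000 { ... },
    // leaving every future reader to guess what 250000 means.
    return amountCents > manualReviewThresholdCents
}

The intent comment is written before the code exists, so later maintainers can check the implementation against it.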

2. Underestimating System Complexity

  • These systems have grown organically over decades, often integrating with other systems and processes in undocumented or implicit ways – sometimes using protocols that no longer exist or are poorly documented themselves. This is especially often the case with proprietary protocols, and even more so when custom hardware is involved – never mind the eternal curse of undocumented storage formats, binary ones in particular.
  • Dependencies are not always well understood, leading to gaps in the new implementation.

3. Scope Creep and Poor Requirements Gathering

  • Stakeholders might not fully articulate all requirements or fail to prioritize them.
  • The rewrite team might inadvertently “over-simplify” or “over-engineer” the replacement, causing mismatches with actual needs.
  • While there may have been initial documentation, new additions and rewrites rarely, if ever, capture the changes in documentation, as many developers still think “code is the documentation”.
    I have never, to this day, seen documentation being a priority to any greater extent in any commercial product, with the exception of mission-critical systems such as aerospace, oil industry, nuclear, and to some degree medical or similar – and even then, it is rarely for any reason other than force of regulation. From what I have seen over my years so far, anything in the financial industry looks more like a joke than anything else.

4. Mismatch Between New and Existing Systems

  • COBOL systems often interact with old, niche hardware and protocols that are difficult to replicate or interface with modern platforms.
    See my previous point.
  • Rewrites might inadvertently introduce performance bottlenecks or fail to handle edge cases that the legacy system managed. Again, this often comes down to poorly understood original requirements and specifications – which may not even be available anymore – together with the fact that many developers simply will not sit down and read such documentation to actually understand what the original code does.
    On the commercial side, there is rarely time allocated for any of this anyway, and you end up paying for it over and over again – often at costs exceeding what it would have taken to allocate the time initially and do it as right as you can from the start.

5. Cultural and Organizational Resistance

  • Organizations often resist change, especially when it involves mission-critical systems.
  • Lack of buy-in from stakeholders or fear of disrupting operations hampers the process.

6. Testing Challenges

  • Legacy systems often run for years without interruption, with real-time updates and transactions.
    In practice this means you often have no clear understanding of what is actually running on the machine, especially once hot patches have been applied.
    While this can be simple enough for small systems, with bigger systems the complexity often grows exponentially.
  • Rewriting introduces risks, and testing environments struggle to replicate the production workload, leading to missed issues.

7. Skill Gaps

  • Teams tasked with rewriting may lack knowledge of legacy systems and their quirks.
  • Similarly, COBOL developers might not be part of the rewrite team, leading to a disconnect between old and new paradigms and a lack of knowledge transfer – especially of the original intent and meaning of certain things.

8. Cost and Time Overruns

  • Rewrites frequently underestimate the effort required, both in terms of budget and time.
    This is often down to poor pre-analysis and understanding of the complexity of the task.
    A rewrite is almost always more complex than writing a new application from scratch, because of all the hidden complexities.
  • Incremental delays add up, and as costs mount, projects are abandoned or deemed infeasible.

9. Failure to Preserve Legacy Business Logic

  • Legacy systems encode decades of evolving business logic.
  • Translating this logic accurately to new systems without introducing errors is extremely challenging.
  • As a consultant on such matters, I often reach the point where the recommendation is simply to start over: write the functionality from a clean slate, based on the existing perception of, and requirements for, the business logic.
    For such projects documentation (specification, documentation, and intent comments in the code) is always a high priority for future maintainability.

Key Point: Lack Of Documentation Amplifies All Other Problems

When documentation is lacking, every other issue is compounded:

  • Reverse-engineering the logic consumes enormous time and resources, and the risk of missing quirks, hidden behaviors, and the like grows with the size and complexity of the application.
  • Testing becomes harder because edge cases are unknown.
  • Training new developers is significantly more difficult.

Solution Approaches?

  • Incremental Modernization:
    Instead of a full rewrite, gradually modernize and refactor specific components.
    Where possible, break out the individual piece and serve it in a new setting (see the proxy sketch after this list).
    One issue at a time.
  • Automated Code Analysis:
    Use tools to extract business logic and system dependencies.
    This is one of the areas where AI tools can actually make sense: capturing very complex logic behaviors and putting them into simpler words and shortened versions that are easier to understand, giving the developer a head start on what they are looking at.
  • Collaborative Teams:
    Combine legacy system experts with modern tech specialists.
    Don’t leave the “legacy teams” behind – they can be the absolute key to the success of the rewrite!
  • Prioritize Documentation:
    All documentation should focus on practical maintainability. 

    Document as much as possible before starting the rewrite and throughout the process.
    The documentation and specification are, in the end, the benchmark and test specification against which you measure “are we there yet?”
    Do we have the correct and expected behavior?
    Also, for the documentation, don’t overdo it.
    There is a very valuable balance between detail and general overview.
    The specifications should be the absolute, non-negotiable requirements.
    The maintenance documentation, should be practical how-to’s with necessary examples and details.
    The protocols and data items should be explained in detail, as this is the basis of all logic.
    The data flows between the components should be made clear.
    NB! The code is the explicit implementation – NOT documentation.
    I am sorry, but if you claim that the code IS the documentation, you are wrong, for the very reason that anyone can read what you did, but not what you intended to do. Because of this, you should always write “intent documentation” as a block before the actual code – and before you write the actual code. This way, you or anyone else has a fighting chance of correcting mistakes made.
    This intent documentation often happens to be the same as the specification, and now, it becomes relatively easy to compare the intent to implementation, to see where the bug is.
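As a sketch of the incremental approach mentioned above – the classic “strangler fig” pattern – here is a minimal Go reverse proxy that routes one migrated endpoint to the new service and leaves everything else on the legacy system. The hostnames, ports, and path are hypothetical:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

// mustProxy builds a reverse proxy for the given base URL, or exits.
func mustProxy(target string) *httputil.ReverseProxy {
    u, err := url.Parse(target)
    if err != nil {
        log.Fatal(err)
    }
    return httputil.NewSingleHostReverseProxy(u)
}

func main() {
    legacy := mustProxy("http://legacy.internal:8080")  // hypothetical legacy host
    modern := mustProxy("http://reports.internal:9090") // hypothetical new service

    mux := http.NewServeMux()
    mux.Handle("/reports/", modern) // the one piece broken out so far
    mux.Handle("/", legacy)         // everything else stays on the old system

    log.Fatal(http.ListenAndServe(":8000", mux))
}

Each component you break out becomes one more route in the mux, until the legacy handler serves nothing and can be retired.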

Why Modern Languages Like Go?

  • Go offers significant advantages for rewriting legacy systems:
    • Simplicity: A minimalistic design reduces complexity, making it easier for teams to adopt.
    • Concurrency: Built-in support for concurrency enables efficient handling of modern workloads.
    • Performance: Go’s compiled nature ensures high performance, rivaling C/C++ in many scenarios.
    • Deployment: Go’s single-binary model simplifies deployment processes, especially in cloud-native environments.
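As a small illustration of the concurrency point, this is all it takes to fan out independent jobs in Go (a generic sketch, nothing project-specific):

package main

import (
    "fmt"
    "sync"
)

func main() {
    jobs := []string{"extract", "transform", "load"}

    var wg sync.WaitGroup
    for _, job := range jobs {
        wg.Add(1)
        go func(name string) { // one lightweight goroutine per job
            defer wg.Done()
            fmt.Println("done:", name)
        }(job)
    }
    wg.Wait() // block until every job has finished
}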

Conclusion

Rewriting an application is a significant undertaking, but with careful preparation, stakeholder alignment, and the use of modern tools and languages, it can transform outdated systems into robust, efficient platforms. By addressing risks head-on and employing best practices, organizations can successfully modernize their applications while minimizing disruption and maximizing value.

Do you have legacy applications that need to be reworked, modernized or documented?
.. all while using modern tools, technologies, and keeping future maintainability and support in mind?

Let’s talk.


Auto-update your Go?

So you want to keep your golang up to date at all times?

Add this to /bin/go-update, stick it in your crontab as a daily job, and you will always be up to date.
Rework as needed for your favourite Linux/OS distro.

#!/bin/bash
# Example crontab entry (daily at 04:15):  15 4 * * * /bin/go-update
set -euo pipefail
cd /tmp
# Ask go.dev for the latest stable version string (e.g. "go1.22.3").
CVERSION="$(curl -s 'https://go.dev/VERSION?m=text' | grep -o 'go[0-9.]*')"
# Download first; the old install is only removed once the download succeeds.
wget "https://go.dev/dl/${CVERSION}.linux-amd64.tar.gz"
rm -rf /usr/local/go
tar -C /usr/local -xzf "${CVERSION}.linux-amd64.tar.gz"
rm "${CVERSION}.linux-amd64.tar.gz"
/usr/local/go/bin/go version

Njoy!!


Cloudflare Domain Proxy with port targets?

Scenario(s): 

You have one or more of the following problems to solve;

  • You are an iGaming provider that needs quickly interchangeable domains to keep working in countries like Indonesia.
  • You need an additional domain to hit your existing HTTPS target, but can’t run multiple SSL certs.
  • You need to map a call directly to a specific port on the target, yet still use CF functionality without the need for custom ports.
  • You want cheaper SSL termination for a whole host of endpoint domains leading to a single target.
  • Any other similar case or need.

You need:

  • An easily configurable Cloudflare worker domain proxy.
  • A worker path setup on the domain.

Here is the step by step solution to the problem:

1) Create the CF worker and name it. 

addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
    // Define the subdomain to port and domain mapping
    const subdomainPorts = {
        'script-name':   { port: '443',  domain: 'realtarget.com' },
        'subdomain1':    { port: '443',  domain: 'realtarget.com' },
        'subdomain2':    { port: '1201', domain: 'realtarget.com' },
         ...
        'subdomain9':   { port: '1209', domain: 'realtarget.com' },
    };

    // Get the URL of the incoming request
    const url = new URL(request.url);
    url.protocol = 'https:'; // Ensure HTTPS on target.
    url.port = '443'; // Default to standard HTTPS port if not found

    // Break the hostname into parts
    const hostnameParts = url.hostname.split('.');

    // Assume the first part of the hostname is the first subdomain
    let firstSubdomain = hostnameParts[0];

    // Check if the first subdomain is in the subdomainPorts mapping
    if (firstSubdomain in subdomainPorts) {
        // Construct new hostname using the first subdomain and target domain
        url.hostname = `${firstSubdomain}.${subdomainPorts[firstSubdomain].domain}`;
        url.port = subdomainPorts[firstSubdomain].port;
    } else {
        // Handle cases where subdomain is not defined in the mapping - default domain or handle as needed
        url.hostname = firstSubdomain + '.realtarget.com'; // Default domain if subdomain is not found
    }

    // Remove or comment out this line if you don't want logging.
    console.log(JSON.stringify(url));

    // Create a new request by cloning the original request to preserve all headers, method, body, etc.
    const newRequest = new Request(url, request);
    // Fetch the response from the new URL
    const response = await fetch(newRequest);
    // Return the response to the client
    return response;
}

2) On the domain DNS settings:

  • Make sure the domain (realtarget.com) itself has an A record pointing somewhere.
  • Add a CNAME for each of the subdomains, pointing to the domain target.
    E.g.:  subdomain1 IN CNAME realtarget.com

3) Caching for targets:

Under “Caching” -> “Configuration”, set the caching level to “Standard”.

4) Setting up the worker path:

Under “Workers Routes”, click “Add route”,
enter *.<newdomain.com>/* as the capture path, and select your worker to handle it.

Done!

What happens next, when you use your shiny new domain “foo.com”, is:

The client types in the new shiny domain https://subdomain1.foo.com/path?args…

  1. The script will strip off everything after the first subdomain (subdomain1).
  2. It will replace the domain with realtarget.com and map the port to 1201, effectively making
    https://subdomain1.foo.com/path?args… appear as:
    https://subdomain1.realtarget.com:1201/path?args… keeping all the headers, body, arguments and whatnot as-is, making both the client and the final target happy.
    You only need a single certificate for the target host – it can even be a long-lived self-signed certificate – with CF acting as the certificate front.


or, in a picture (a drawio diagram).

Enjoy!



GPS Location?

Need a routine to determine if a lat/long is inside or outside a specific area?

Here’s a Golang routine for this that can easily be adapted to any other language.

/*
 * Free to use as you see fit. 
 */

package gps

type polygon [][]float64

// Enter the lat,long coordinates (min 3), e.g.:
// poly := [][]float64{ {1, 1}, {1, 2}, {2, 2}, {2, 1}, ... }
// The winding order of the vertices does not matter for this even-odd
// ray-casting test, and repeating the first vertex at the end to close
// the polygon is optional (a duplicate closing vertex is harmless).
// in := Inside(1.5, 1.5, poly) --> true
// in := Inside(2.5, 1.5, poly) --> false

// Inside tests whether a GPS coordinate lies inside the polygon.
func Inside(latitude float64, longitude float64, boundary polygon) bool {
    inside := false

    j := len(boundary) - 1
    for i := 0; i < len(boundary); i++ {

       // Vertices are stored as {lat, long}: index 0 is the latitude
       // (the y of the ray-casting formula), index 1 the longitude (x).
       yi := boundary[i][0]
       xi := boundary[i][1]

       yj := boundary[j][0]
       xj := boundary[j][1]

       // Crossing the border of the polygon?
       intersect := ((yi > latitude) != (yj > latitude)) && 
                    (longitude < (xj-xi)*(latitude-yi)/(yj-yi)+xi)

       if intersect {
          inside = !inside
       }
       j = i
    }

    return inside
}
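
A minimal usage sketch, assuming the package above is importable as gps (the import path and coordinates are made up):

package main

import (
    "fmt"

    "example.com/gps" // hypothetical import path for the package above
)

func main() {
    // A rough rectangle around central Stockholm (approximate, illustrative).
    area := [][]float64{
        {59.30, 18.02},
        {59.30, 18.12},
        {59.36, 18.12},
        {59.36, 18.02},
    }
    fmt.Println(gps.Inside(59.33, 18.07, area)) // true: inside the box
    fmt.Println(gps.Inside(59.40, 18.07, area)) // false: north of it
}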



The Noble 8-fold path of development

…. or maybe not so noble,
but more a tactical assault on the problem.

  1. Slap some stuff together
  2. Understand what you did, what it does, and what it should do.
    If you don’t or it doesn’t, revert to 1…
  3. Fix the remains so it does what it was supposed to do,
    in a passable fashion.
  4. Run it by some innocent victim (aka guinea pig or co-worker),
    and see about their reaction.

    If bad, revert to step 3.
  5. Prettify, if required…(trust me, it is.)
  6. Do a QA / code review with your peers (guinea pigs),
    and when the number of Whiskey Tango Foxtrots/minute
    goes below 1, you are generally safe to proceed.
  7. Release the product.
  8. Duck/Hide under the table, wait for the client fallout and bugs to be reported.

When things go untested…

This shows the importance of fundamental testing of code in Dev and Staging, BEFORE pushing to prod,
no matter the urgency, unless you are absolutely sure it will work and it is an emergency, or, of course,
you are out of options and ready to take the risk of burning down the house…

What could possibly go wrong, right?