
Auto-update your Go?

So you want to keep your Go installation up to date at all times?

Add this to /bin/go-update, stick it in your crontab as a daily job (a sample crontab entry is shown after the script), and you will always be up to date.
Rework as needed for your favourite Linux/OS distro.

#!/bin/bash
set -euo pipefail

# Find the current stable release, e.g. "go1.22.5"
cd /tmp
CVERSION="$(curl -s 'https://go.dev/VERSION?m=text' | grep -o 'go[0-9.]*' | head -n 1)"

# Fetch the release, replace the old installation and clean up
wget "https://go.dev/dl/${CVERSION}.linux-amd64.tar.gz"
rm -rf /usr/local/go
tar -C /usr/local -xzf "${CVERSION}.linux-amd64.tar.gz"
rm "${CVERSION}.linux-amd64.tar.gz"

# Verify the new installation (full path, since cron jobs rarely have it in PATH)
/usr/local/go/bin/go version
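
A matching daily crontab entry could look something like this (the time of day and log path are just suggestions; run it as root, since the script replaces /usr/local/go):

# m   h   dom mon dow  command
  30  5   *   *   *    /bin/go-update >/var/log/go-update.log 2>&1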

Njoy!!


Crontab cheat sheet

For all the Linux admins out there – Add this to the header of all your crontabs.
… and it becomes a lot clearer to anyone reading them…

# *   *   *   *   *      command_to_be_executed
# -   -   -   -   -
# |   |   |   |   |
# |   |   |   |   +----- day of the week (0 - 6) (Sunday=0)
# |   |   |   +--------- month (1 - 12)
# |   |   +------------- day of the month (1 - 31)
# |   +----------------- hour (0 - 23)
# +--------------------- min (0 - 59)
#
# Asterisk    (*)  any value
# Comma       (,)  value list separator (0,20,30,45)
# Dash        (-)  range of values (8-17)
# Slash       (/)  step values (*/20)
#
# @reboot     Run once, at startup
# @yearly     Run once a year,       "0 0 1 1 *"
# @annually   Same as @yearly
# @monthly    Run once a month,      "0 0 1 * *"
# @weekly     Run once a week,       "0 0 * * 0"
# @daily      Run once a day,        "0 0 * * *"
# @hourly     Run once an hour,      "0 * * * *"
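
As a quick usage example of the syntax above (the script path is just a placeholder):

# Run a cleanup job at 02:30 every weekday (Mon-Fri)
30 2 * * 1-5 /usr/local/bin/cleanup.sh >/dev/null 2>&1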

Njoy!


Cloudflare Domain Proxy with port targets?

Scenario(s):

You have one or more of the following problems to solve:

  • You are an iGaming provider that needs quickly interchangeable domains to work in countries like Indonesia.
  • You need an additional domain to hit your existing HTTPS target, but can’t run multiple SSL certs.
  • You need to map a call to a specific port on the target, yet still use CF functionality without the need for custom ports.
  • You want cheaper SSL termination for a whole host of endpoint domains leading to a single target.
  • Any other similar case or need.

You need:

  • An easily configurable Cloudflare worker domain proxy.
  • A worker path setup on the domain.

Here is the step-by-step solution to the problem:

1) Create the CF worker and name it.

addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
    // Define the subdomain to port and domain mapping
    const subdomainPorts = {
        'script-name':   { port: '443',  domain: 'realtarget.com' },
        'subdomain1':    { port: '443',  domain: 'realtarget.com' },
        'subdomain2':    { port: '1201', domain: 'realtarget.com' },
        // ... more subdomains as needed ...
        'subdomain9':    { port: '1209', domain: 'realtarget.com' },
    };

    // Get the URL of the incoming request
    const url = new URL(request.url);
    url.protocol = 'https:'; // Ensure HTTPS on target.
    url.port = '443'; // Default to standard HTTPS port if not found

    // Break the hostname into parts
    const hostnameParts = url.hostname.split('.');

    // Assume the first part of the hostname is the first subdomain
    let firstSubdomain = hostnameParts[0];

    // Check if the first subdomain is in the subdomainPorts mapping
    if (firstSubdomain in subdomainPorts) {
        // Construct new hostname using the first subdomain and target domain
        url.hostname = `${firstSubdomain}.${subdomainPorts[firstSubdomain].domain}`;
        url.port = subdomainPorts[firstSubdomain].port;
    } else {
        // Handle cases where subdomain is not defined in the mapping - default domain or handle as needed
        url.hostname = firstSubdomain + '.realtarget.com'; // Default domain if subdomain is not found
    }

    // Remove or comment out the next line if you don't want logging.
    console.log(JSON.stringify(url));

    // Create a new request by cloning the original request to preserve all headers, method, body, etc.
    const newRequest = new Request(url, request);
    // Fetch the response from the new URL
    const response = await fetch(newRequest);
    // Return the response to the client
    return response;
}

2) On the domain DNS settings:

  • Make sure the domain (realtarget.com) itself has an A record going somewhere.
  • Add a CNAME for each of the subdomains, pointing to the domain target,
    e.g.:  subdomain1 IN CNAME realtarget.com

3) Caching for targets:

Under “Caching” -> “Configuration”, set the caching level to “Standard”.

4) Setting up the worker path:

Under “Workers Routes”, click “Add route”,
enter *.<newdomain.com>/* as the capture path, and select your worker to handle it.

Done!

What happens when you use your shiny new domain “foo.com” is this:

The client types in https://subdomain2.foo.com/path?args…

  1. The script takes the first label of the hostname (subdomain2).
  2. It replaces the domain with realtarget.com and maps the port to 1201 (as configured in the worker), effectively making
    https://subdomain2.foo.com/path?args… appear as:
    https://subdomain2.realtarget.com:1201/path?args… keeping all the headers, body, arguments and whatnot as is, making both the client and the final target happy,
    and you only need a single certificate on the target host, which can even be a long-life self-signed certificate,
    with CF acting as the certificate front. A quick way to verify this is shown below.
 

or, in a picture (a drawio diagram).

Enjoy!

 


UDM Pro and SSL

So you have a Ubiquiti Dream Machine Pro (UDM pro) box, and you want to install SSL certificates?

This applies to UniFi OS version 3.2+.

This is quite straightforward, in a few simple steps.

  1. Enable SSH login in the machine.
  2. Connect by SSH using “admin” and your password to the machine.
  3. Do a
    cd /data/unifi-core/config
  4. In there, make a backup:
    tar zcvf backup.tgz *
    and download this file (sftp / scp):
    scp admin@<udm-ip>:/data/unifi-core/config/backup.tgz .
  5. In there, you should find the following files:
    unifi-core-direct.crt
    unifi-core-direct.key
    unifi-core.crt
    unifi-core.key
  6. Make two copies of your SSL private key, named unifi-core.key and unifi-core-direct.key.
  7. Create a new file called unifi-core.crt, and into this file copy your certificate
    followed by the root CA bundle from your certificate issuer, such as:
    <certificate_file>
    <bundle_file>
    Save it, then copy the file unifi-core.crt to unifi-core-direct.crt.

    Here are the command-line steps to create all of the files above:
    cat cert.key > unifi-core.key
    cat cert.key > unifi-core-direct.key
    cat cert.pem > unifi-core.crt
    echo "" >> unifi-core.crt
    cat cert.ca-bundle.pem >> unifi-core.crt
    cp unifi-core.crt unifi-core-direct.crt

  8. Upload the files (sftp/scp) to the folder /data/unifi-core/config:
    scp unifi-core-* admin@<udm-ip>:/data/unifi-core/config/
  9. On your UDM Pro, issue the command:
    systemctl restart unifi-core
    You should now be able to connect to the machine over HTTPS with the certificate.
    Note that you may need to point the hostname to the right address in your DNS, or add the IP to your lmhosts/hosts file,
    such as 192.168.0.1 gw.<domain.tld> (a quick verification command is shown after this list).
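
If you want to verify which certificate the box is actually serving, a quick check from a client machine (using the example hostname above) could be:

# Show subject, issuer and validity of the certificate presented on port 443
openssl s_client -connect gw.<domain.tld>:443 -servername gw.<domain.tld> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates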

That should be it, and you should have a working SSL certificate on the box.
Note that OS updates may reset the files, so keep them handy.

Good luck!

 


Some thoughts on the future concept of soft hardware..

Imaginary concept image illustrating reconfigurable computing, using current and future concepts.

I have been considering hardware solutions for many years – since around the late 80s – designing some of them and playing around with even more.

I have worked on research in reconfigurable computing, advised other research projects in the same and similar areas, and run simulated scenarios on virtual, self-generated parallel processing units (PPUs). These are in a way similar to CPUs, but differ in that, while they have a RISC-like basic instruction set, they are self-generated to move complex and heavy tasks into discrete hardware, and they can have an on-the-fly reconfigurable and extendable instruction set. This shifts computing from clocked sequential solutions to discrete clocked or free-flowing deterministic logic, yielding speed improvements over traditional processing, often by multiple orders of magnitude.

Couple this with the modern liking for, and demand for, parallelism and multithreading, and think: what if we had a simpler PPU we could throw our application at, where we could create arbitrarily complex instructions that would be automatically translated into hardware, and where we had access to hundreds or even thousands of [discrete hardware] threads, all running on PPUs, offering true cycle-by-cycle parallel processing without the context-switching penalty of a traditional CPU?

There is a lot of talk today about modern varieties of compute – CPU vs GPU vs DPU vs TPU – but what if we had a merger: an RCU that incorporates components of all of them, allowing for massive, scalable, parallel translation of software into discrete hardware solutions, by itself, on demand?

Imagine a scenario where writing software no longer means executing static code in the classic form of a set of sequential steps. Instead, it is transformed into a combination of classic code and discrete deterministic logic, composed of all of the above technologies (and new upcoming ones) in combination. To top it off, the machine itself analyzes the performance of the solution, both software and hardware, to find better and more efficient ways of doing the job – coming up with a faster solution by itself, generating new code and reconfiguring itself to be more efficient.
Welcome to the concept of the RCU. (No, not the classic Read-Copy-Update concept…)

For the future, I see a merger of all these aforementioned components into the “RCU” – a Reconfigurable Compute Unit. It is no longer a set of distinct types of computing solutions; instead, different kinds of compute are merged into a single unit, and elements of the different technologies are called upon and utilized by the technology itself. Its own behavioural and performance analysis – which could very well be driven by generative AI – will continuously find new ways to make it more efficient.

After all, turning software into hardware is nothing new, and it is not rocket science – these are well-understood and commonly utilized concepts. What is new is making the hardware build itself to its needs to gain performance, incrementally, by analyzing itself – not only by way of discrete logic, but with new, smarter instructions, created on the fly, based on the needs of the software.

Such tasks and problems are commonly not massively compute-heavy, but relatively simple – just like most everyday computational tasks – and can be served by relatively simple and low-powered solutions. What makes most tasks go fast is either massive parallelism or, where that is not suitable, clever solutions where you do not have to rely on steps, but on solution flows.

In the RCU scenario, you, as a developer, could focus on simply getting a functional solution to the problem, and let the machine take care of the solution analysis and optimization. This could also be coupled with adaptive, descriptive problem-to-solution generation, as we are entering the era where this is both technically and practically feasible.

This has been a research journey in both thought and action since around 1990, and it is still ongoing at EmberLabs.

Are you ready for it, and for what is coming?
Are you ready to bite?

If you want to know more and possibly collaborate, we can talk.


GPS Location?

Need a routine to determine if a lat/long is inside or outside a specific area?

Here’s a Golang routine for this, which can easily be adapted to any other language.

/*
 * Free to use as you see fit. 
 */

package GPS

type polygon [][]float64

// Enter the {latitude, longitude} coordinates (min 3 vertices), e.g.:
// poly := [][]float64{ { 1,1 }, { 1,2 }, { 2,2 }, { 2,1 } ... {1,1}}
// The vertices may be listed in either winding direction; repeating the
// first vertex at the end to close the polygon is optional and harmless.
// in := Inside(1.5, 1.5, poly) --> true
// in := Inside(2.5, 1.5, poly) --> false

// Inside tests whether a GPS coordinate lies inside the polygon,
// using the classic ray-casting (even-odd) rule.
func Inside(latitude float64, longitude float64, boundary polygon) bool {
    inside := false

    j := len(boundary) - 1
    for i := 0; i < len(boundary); i++ {

       // Vertex i and the previous vertex j, both stored as {latitude, longitude}
       latI := boundary[i][0]
       lonI := boundary[i][1]

       latJ := boundary[j][0]
       lonJ := boundary[j][1]

       // Does a ray from the point cross the edge between vertex j and vertex i?
       intersect := ((latI > latitude) != (latJ > latitude)) &&
                    (longitude < (lonJ-lonI)*(latitude-latI)/(latJ-latI)+lonI)

       if intersect {
          inside = !inside
       }
       j = i
    }

    return inside
}
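
As a quick usage sketch (the module path in the import is just a placeholder, and the coordinates are purely illustrative):

package main

import (
    "fmt"

    "example.com/yourmodule/GPS" // placeholder module path for the package above
)

func main() {
    // A rough bounding polygon, vertices given as {latitude, longitude}
    box := [][]float64{
        {59.30, 18.00},
        {59.30, 18.15},
        {59.36, 18.15},
        {59.36, 18.00},
        {59.30, 18.00}, // closing vertex (optional)
    }

    fmt.Println(GPS.Inside(59.33, 18.07, box)) // true  - point is inside the box
    fmt.Println(GPS.Inside(59.40, 18.07, box)) // false - point is north of the box
}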

 


Pink October Talk @ Glitnor Group

Chris Sprucefield
Pink October talk at Glitnor Group.

Breast cancer…

There are four dreaded words that you as a guy never want to hear, and no woman ever wants to say –
“I found a lump”.

To me – to us – this happened quite some time ago, at the end of the 90s, and I have happily remarried since, but my story goes back a long way, and it still goes to show how fast it can happen and develop, like a lightning strike out of nowhere.

While the research has come quite some way, there’s still no cure, and it’s an illness that leaves no one untouched. It’s evil. It affects everyone involved, in ways you cannot imagine until it happens, and even then you end up stumped and lost, even as a “bystander”.

It started with a lump, she got treatment, and then it looked all good and well for some time. We moved countries, and one day she said – I can’t hear with my left ear. Hospital, and it had returned, 4 years later, now metastatic, and from there it was all downhill; she eventually passed away.

Luckily, today this will not be the outcome for everyone, as treatments have gotten much better – but treatment is not everything.
There are many more aspects to it than just the treatments…

One of the worst parts for us was long-time friends disappearing because they “could not deal with cancer”, or hospitals. They just tried to find any excuse to bail out, with the exception of a very few who stuck around. If you are a true friend of someone, don’t be the person who takes the bailout card – be the one who cares, even if just in the small things…

Here is some of what you can do, even if you don’t have much time or resources, that will be gold dust to victims of cancer (in general):

  • Please warn the person going for chemo not to eat the things they usually like or love right afterwards – have them avoid those for the time being – or they may end up never being able to eat them again, as chemo can and will play tricks with the mind… Nobody tells you this, and we wished someone had…
  • They have kids?
    Apart from everyday life – just like you, we are all busy – they also have to deal with the effects of cancer and treatments, on top of the existential crisis they inevitably go through, and it wears people down. Offer their kid(s) the occasional sleepover, or just take the kids off their hands, even for a day or an evening.
    They need adult time to cope, or even just peace and quiet. You have no idea what the value of such a small gesture is or may be…
  • Chemo?
    They just got back… They still need to eat and so on… Going to the shop?
    Ask them – I’m going shopping, do you need any top-up stuff while I’m there?
    That top-up shop, delivered, may be the difference between having to do a shop the day after chemo, or two days later when they feel better and can actually cope with it – because we are all people, and we forget things until we need them…
  • Getting to / from the hospital? If you can, take them there and pick them up. Get a paper “flight sickness bag”, just in case, and keep it in the car. You know why…
    Take that one worry out of their life if you can – they have enough existential stuff on their minds already – and that is gold, and they know they are safe with you…

The practical little things…

Sometimes, people being people, things can get too much for anyone, and if they cuss you out on a particularly bad day, don’t take it personally – they are likely just exhausted and need to vent, and you happen to be in the firing line simply for being their trusted friend, because they feel comfortable and safe around you. So take it, just listen, and don’t let it get to you.
Really – let it go in one ear and out the other… I know it’s hard, but you are the true friend, and for good reason.
They will be embarrassed enough afterwards about what happened, but be there the next time… the true friend you are.

Everyone has ups and downs. They just have some more of them right now – the downs, and the feeling of the world being against them.

Be the brave one for them – they need you and your strength as their guidance, and your presence as the anchor to reality.
Think about it – these are not hard things, not big things; even small things count big time, like just being around, for a call, for a coffee, etc.

It’s quality of life – the sense of normality that brings hope and a future back into the picture. You just being you as usual, and most importantly, being there, counts. Now, you can do the above, and you can also donate to research to make this problem eventually become what we all want it to be – extinct. Take care out there!

#PinkOctober #BreastCancerSpouse #BreastCancer #BreastCancerAwareness


The Noble 8-fold path of development

…. or maybe not so noble,
but more a tactical assault on the problem.

  1. Slap some stuff together
  2. Understand what you did, what it does, and what it should do.
    If you don’t or it doesn’t, revert to 1…
  3. Fix the remains so it does what it was supposed to do,
    in a passable fashion.
  4. Run it by some innocent victim (aka guinea pig or co-worker),
    and see their reaction.

    If bad, revert to step 3.
  5. Prettify, if required…(trust me, it is.)
  6. Do a QA / code review with your peers (guinea pigs),
    and when the number of Whiskey Tango Foxtrots/minute
    goes below 1, you are generally safe to proceed.
  7. Release the product.
  8. Duck/Hide under the table, wait for the client fallout and bugs to be reported.

Nasa Software Catalogue 2023/2024

I just thought I would share this little gem with you all!

If you are into engineering, research or development, this may be of interest to you as well – the open catalogue of software in a wide range of areas.

“The 2023-2024 Software Catalog is Here!
Each year, NASA scientists, engineers, and developers create software packages to manage space missions, test spacecraft, and analyze the petabytes of data produced by agency research satellites. As the agency innovates for the benefit of humanity, many of these programs are now downloadable and free of charge through NASA’s Software Catalog.”

https://software.nasa.gov/

Enjoy!


When things go untested…

This shows the importance of fundamental testing of code in Dev and Staging BEFORE pushing to prod,
no matter the urgency – unless you are absolutely sure it will work and it is an emergency, or, of course,
you are out of options and ready to take the risk of burning the house down…

What could possibly go wrong, right?