Posted on

Cloudflare Domain Proxy with port targets?

Scenario(s): 

You have one or more of the following problems to solve:

  • You are an iGaming provider that needs quickly interchangeable domains to work in countries like Indonesia.
  • You need an additional domain to hit your existing HTTPS target, but can’t run multiple SSL certs.
  • You need to map a call to a direct port on the target, yet still use CF functionality without resorting to custom ports.
  • You want cheaper SSL termination for a whole host of endpoint domains leading to a single target.
  • Any other similar case or need.

You need:

  • An easily configurable Cloudflare worker domain proxy.
  • A worker path setup on the domain.

Here is the step by step solution to the problem:

1) Create the CF worker and name it. 

addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
    // Define the subdomain to port and domain mapping
    const subdomainPorts = {
        'script-name':   { port: '443',  domain: 'realtarget.com' },
        'subdomain1':    { port: '443',  domain: 'realtarget.com' },
        'subdomain2':    { port: '1201', domain: 'realtarget.com' },
        // ...
        'subdomain9':   { port: '1209', domain: 'realtarget.com' },
    };

    // Get the URL of the incoming request
    const url = new URL(request.url);
    url.protocol = 'https:'; // Ensure HTTPS on the target.
    url.port = '443'; // Default to the standard HTTPS port if no mapping is found.

    // Break the hostname into parts
    const hostnameParts = url.hostname.split('.');

    // Assume the first part of the hostname is the first subdomain
    let firstSubdomain = hostnameParts[0];

    // Check if the first subdomain is in the subdomainPorts mapping
    if (firstSubdomain in subdomainPorts) {
        // Construct new hostname using the first subdomain and target domain
        url.hostname = `${firstSubdomain}.${subdomainPorts[firstSubdomain].domain}`;
        url.port = subdomainPorts[firstSubdomain].port;
    } else {
        // Handle cases where subdomain is not defined in the mapping - default domain or handle as needed
        url.hostname = firstSubdomain + '.realtarget.com'; // Default domain if subdomain is not found
    }

    // Remove this line if you don't want logging.
    console.log(JSON.stringify(url));

    // Create a new request by cloning the original request to preserve all headers, method, body, etc.
    const newRequest = new Request(url, request);
    // Fetch the response from the new URL
    const response = await fetch(newRequest);
    // Return the response to the client
    return response;
}

2) On the domain DNS settings:

  • Make sure the domain (realtarget.com) itself has an A record pointing somewhere.
  • Add a CNAME for each of the subdomains, pointing to the domain target.
    E.g.:  subdomain1 IN CNAME realtarget.com

3) Caching for targets:

Under “Caching” –> “Configuration”, set the caching level to “Standard”.

4) Setting up the worker path:

Under “Workers Routes”, click “Add route”,
enter *.<newdomain.com>/* as the capture path, and select your worker to handle it.

Done!

Here is what happens when you use your shiny new domain “foo.com”:

The client requests https://subdomain2.foo.com/path?args…

  1. The script takes the first subdomain (subdomain2) from the hostname.
  2. It replaces the domain with realtarget.com and maps the port to 1201, effectively making
    https://subdomain2.foo.com/path?args… appear as
    https://subdomain2.realtarget.com:1201/path?args…
    All headers, body, arguments and whatnot are kept as-is, making both the client and the final target happy.
    You only need a single certificate on the target host, which can even be a long-life self-signed certificate,
    with CF acting as the certificate front.

 

or, in a picture (a drawio diagram).

Enjoy!

 


UDM Pro and SSL

So you have a Ubiquiti Dream Machine Pro (UDM pro) box, and you want to install SSL certificates?

This goes for the OS Version 3.2+

This is quite straightforward, in a few simple steps.

  1. Enable SSH login in the machine.
  2. Connect by SSH using “admin” and your password to the machine.
  3. do a
    cd /data/unifi-core/config
  4. In there, do a backup:
    tar zcvf backup.tgz *
    and download this file (sftp / scp).
    scp admin@<udm-ip>:/data/unifi-core/config/backup.tgz .
  5. in there, you should find the following files: 
    unifi-core-direct.crt
    unifi-core-direct.key
    unifi-core.crt
    unifi-core.key
  6. Make a copy of your SSL key, and rename it as unifi-core.key and unifi-core-direct.key
  7. Create a new file called unifi-core.crt, and into this file, copy your certificate
    followed by the root CA bundle from your certificate issuer, such as:
    <certificate_file>
    <bundle_file>
    Save it, then copy the file unifi-core.crt to unifi-core-direct.crt.
    Here are the command line steps to create all the files above:
    cat cert.key > unifi-core.key
    cp unifi-core.key unifi-core-direct.key
    cat cert.crt > unifi-core.crt
    echo "" >> unifi-core.crt
    cat cert.ca-bundle >> unifi-core.crt
    cp unifi-core.crt unifi-core-direct.crt
  8. Upload the files (sftp/scp) to the folder /data/unifi-core/config
    scp unifi-core-* admin@<udm-ip>:/data/unifi-core/config/
  9. On your UDM pro, issue the command:
    systemctl restart unifi-core
    You should now be able to connect to the machine over HTTPS with the certificate.
    Note that you may need to publish the address in your DNS, or add the IP to your lmhosts/hosts file,
    such as 192.168.0.1 gw.<domain.tld>
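Before uploading in step 8, a quick sanity check that the combined unifi-core.crt really contains both the leaf certificate and the CA bundle can save a restart cycle. This sketch builds the files as in step 7, using placeholder contents rather than real PEM data:

```shell
#!/usr/bin/env bash
# Build the combined cert files as in step 7, then sanity-check the result.
# cert.crt / cert.ca-bundle below hold placeholder contents, not real certs.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'leaf...' '-----END CERTIFICATE-----' > cert.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'ca...'   '-----END CERTIFICATE-----' > cert.ca-bundle

cat cert.crt > unifi-core.crt
echo "" >> unifi-core.crt
cat cert.ca-bundle >> unifi-core.crt
cp unifi-core.crt unifi-core-direct.crt

# The combined file should contain exactly two certificates,
# and the -direct copy should be byte-identical.
grep -c 'BEGIN CERTIFICATE' unifi-core.crt
cmp -s unifi-core.crt unifi-core-direct.crt && echo "copies match"
```

If the certificate count is not what you expect (leaf plus however many CA certificates the bundle holds), fix the files before uploading.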

That should be it, and you should have a working SSL certificate on the box.
Note that updates of the OS may reset the files, so keep them handy.

Good luck!

 


Some thoughts on the future concept of soft hardware..

Concept image: an imagined illustration of reconfigurable computing, using current and future concepts.

I have been considering hardware solutions for many years, since around the late ’80s, designing some of them and playing around with even more.

I have worked on and advised research in reconfigurable computing, for other research projects in the same and similar areas. I have run simulated scenarios on virtual, self-generated parallel processing units (PPUs). These are in a way similar to CPUs, but differ in that, while they have a RISC-like basic instruction set, they are self-generated to transfer complex and heavy tasks into discrete hardware. They can also have an on-the-fly reconfigurable and extendable instruction set, accelerating computing from clocked sequential solutions to discrete clocked or free-flowing deterministic logic, yielding speed improvements over traditional processing, often by multiple orders of magnitude.

Couple this with the modern liking for, and demand for, parallelism and multithreading, and think: what if we had a simpler PPU that we could throw our application at, where we could create arbitrarily complex instructions that would be automatically translated into hardware? What if we had access to hundreds or even thousands of [discrete hardware] threads, all running on PPUs, offering true cycle-by-cycle parallel processing without the context-switching penalty of a traditional CPU?

There is a lot of talk today about modern varieties of CPUs vs GPUs vs DPUs vs TPUs. But just what if we had a merger: an RCU that would incorporate components of all of them, allowing for massive, scalable, parallel translation of software into discrete hardware solutions, by itself, on demand?

Imagine a scenario where writing software no longer means executing static code in the classic form of a set of sequential steps. Instead, the program is transformed into a combination of classic code and discrete deterministic logic, composed of all of the above technologies (and new upcoming ones) in combination. To top it off, the machine itself analyzes the performance of the solution, both software and hardware, to find better and more efficient ways of doing the job, coming up with a faster solution by itself, generating new code and reconfiguring itself to be more efficient.
Welcome to the concept of the RCU. (No, not that classic Read-Copy-Update concept…)

For the future, I see a merger of all these aforementioned components into the “RCU” – a Reconfigurable Compute Unit. It is no longer a set of distinct types of computing solutions; instead, different kinds of compute solutions are merged into a single unit, and elements of the different technologies are called upon and utilized by the technology itself. Its own behavioral and performance analysis, which could very well be driven by generative AI solutions, will continuously find new ways to make it more efficient.

After all, turning software into hardware is nothing new, and it’s not rocket science – these are well-understood and commonly utilized concepts. What is new is making the hardware build itself to its needs to gain performance, incrementally, by analyzing itself: not only by way of discrete logic, but through new, smarter instructions, created on the fly, based on the needs of the software.

Such tasks and problems are commonly not massively compute-heavy, but relatively simple, just like most everyday computational tasks: ones that can be served by relatively simple and low-powered solutions. What makes most tasks go fast is either massive parallelism or, where that is not suitable, clever solutions where you don’t have to rely on steps, but on solution flows.

In the RCU scenario, you, as a developer, could focus on simply getting a functional solution to the problem, and let the machine take care of solution analysis and optimization. This could also be coupled with adaptive, descriptive problem-to-solution generation, as we are entering the era where this is both technically and practically feasible.

This has been a research journey in both thought and action since around 1990, and it is still ongoing at EmberLabs.

Are you ready for it and what is coming?
Are you ready to bite?

If you want to know more and possibly collaborate, we can talk.


GPS Location?

Need a routine to determine if a lat/long is inside or outside a specific area?

Here’s a Golang routine for this, which can easily be adapted to any other language.

/*
 * Free to use as you see fit. 
 */

package GPS

type polygon [][]float64

// Enter the lat,long coordinates (min 3)
// poly := [][]float64{ { 1,1 }, { 1,2 }, { 2,2 }, { 2,1 } ... {1,1}}
// The winding order of the vertices does not matter for this even-odd
// ray-casting test, and since the loop wraps around automatically,
// repeating the first vertex at the end to close the polygon is optional.
// in := Inside(1.5, 1.5, poly) --> true
// in := Inside(2.5, 1.5, poly) --> false

// Test if a GPS coordinate is inside the polygon (even-odd ray casting)
func Inside(latitude float64, longitude float64, boundary polygon) bool {
    inside := false

    j := len(boundary) - 1
    for i := 0; i < len(boundary); i++ {

       // Vertex coordinates: [0] = latitude, [1] = longitude
       yi := boundary[i][0]
       xi := boundary[i][1]

       yj := boundary[j][0]
       xj := boundary[j][1]

       // Crossing the border of the polygon?
       intersect := ((yi > latitude) != (yj > latitude)) && 
                    (longitude < (xj-xi)*(latitude-yi)/(yj-yi)+xi)

       if intersect {
          inside = !inside
       }
       j = i
    }

    return inside
}

 


Pink October Talk @ Glitnor Group

Chris Sprucefield
Pink October talk at Glitnor Group.

Breast cancer…

There are four dreaded words that you as a guy never want to hear, and no woman ever wants to say –
“I found a lump”.

To me, to us, this happened quite some time ago, at the end of the ’90s, and I’m happily remarried since. But my story goes back a long way, and it still goes to show how fast it can happen and develop, like a lightning strike out of nowhere.

While the research has come quite some way, there’s still no cure, and it’s an illness that leaves no one untouched. It’s evil. It affects everyone involved, in ways you cannot imagine until it happens, and even then, you end up stumped and lost, even as a “bystander”.

It started with a lump, she got treatment, and then it all looked good and well for some time. We moved countries, and one day she said: “I can’t hear in my left ear.” Hospital, and it had returned, four years later, now metastatic. From there, it was all downhill, and she eventually passed away.

Luckily, today, this will not be the case for everyone, as treatments have gotten way better. But the treatments are not all.
There are many more aspects to it than just treatments…

One of the worst parts for us was long-time friends disappearing, because they “could not deal with cancer”, or hospitals. They just tried to find any excuse to bail out, with the exception of a very few who stuck around. If you are a true friend of someone, don’t be that person, and don’t take the bailout card – be the one who cares, even if just in the small things…

Here’s some of what you can do, even if you don’t have much time or resources, that will be gold dust to victims of cancer (in general):

  • Please warn the person going for chemo not to eat the usual things they like or love afterwards; they should avoid those for the time being, or they may end up not being able to eat them afterwards, as chemo can and will play tricks with your mind… Nobody tells you this, and we wish someone had…
  • They got kids?
    Apart from just life – just like you, we are all busy – they also have to deal with the effects of cancer and treatments, on top of the existential crisis they inevitably go through, and it wears people down. Offer their kid(s) the occasional sleepover, or just take the kids off their hands, even for a day or evening.
    They need adult time to cope, or even just peace and quiet. You have no idea what the value of such a small gesture is or may be…
  • Chemo?
    They just got back… They still need to eat and so on… Going to the shop?
    Ask them: “I’m going shopping, do you need any top-up stuff while I’m there?”
    That top-up shop, delivered, may be the difference between having to do a shop the day after chemo, or two days later when they feel better and can actually cope with it. We are all people, and we forget things until we need them…
  • Getting to / from the hospital? If you can, take them there and pick them up. Get a paper “flight sickness bag”, just in case, and keep it in the car. You know why…
    Take that one worry out of their life if you can. They have enough existential stuff on their minds already, and that is gold, and they know they are safe with you…

The practical little things…

Sometimes, people being people, things can get too much for anyone, and if they cuss you out, having a particularly bad day, don’t take it personally – they are likely just exhausted and need to vent, and you just happen to be in the firing line for just being their trusted friend, because they feel comfortable and safe around you, so take it, just listen and don’t let it get to you.
Really – let it go in one ear and out the other… I know it’s hard, but you are the true friend, and for good reason.
They will be embarrassed enough afterwards about what happened, but be there the next time… the true friend you are.

Everyone has ups and downs. They just have some more of the downs right now, and the feeling that the world is against them.

Be the brave one for them – they need you and your strength as their guidance, and your presence as the anchor to reality.
Think about it – It’s not really hard things, it’s not big things, even small things count big time, but just being around, for a call, for a coffee etc.

It’s quality of life, the sense of normality, that brings hope and a future back into the picture. You just being you as usual, and most importantly, being there, counts.

Now, you can do the above, and you can also donate to research to make this problem eventually become what we all want – extinct.

Take care out there!

#PinkOctober #BreastCancerSpouse #BreastCancer #BreastCancerAwareness


The Noble 8-fold path of development

…. or maybe not so noble,
but more a tactical assault on the problem.

  1. Slap some stuff together
  2. Understand what you did, what it does, and what it should do.
    If you don’t or it doesn’t, revert to 1…
  3. Fix the remains so it does what it was supposed to do,
    in a passable fashion.
  4. Run it by some innocent victim (aka guinea pig or co-worker),
    and see about their reaction.

    If bad, revert to step 3.
  5. Prettify, if required…(trust me, it is.)
  6. Do a QA / code review with your peers (guinea pigs),
    and when the number of Whiskey Tango Foxtrots/minute
    goes below 1, you are generally safe to proceed.
  7. Release the product.
  8. Duck/Hide under the table, wait for the client fallout and bugs to be reported.

Nasa Software Catalogue 2023/2024

I just thought I would share this little gem with you all!

If you are into engineering, research, or development, this may be of interest to you as well – the open catalogue of software in a wide range of areas.

“The 2023-2024 Software Catalog is Here!
Each year, NASA scientists, engineers, and developers create software packages to manage space missions, test spacecraft, and analyze the petabytes of data produced by agency research satellites. As the agency innovates for the benefit of humanity, many of these programs are now downloadable and free of charge through NASA’s Software Catalog.”

https://software.nasa.gov/

Enjoy!


When things go untested…

This shows the importance of fundamental testing of code in Dev and Staging BEFORE pushing to prod,
no matter the urgency – unless you are absolutely sure it will work and it is an emergency, or, of course,
you are out of options and ready to take the risk of burning down the house…

What could possibly go wrong, right?

 


Tinypng script

Using the service from https://tinypng.com makes it easy to mass-shrink your PNG images to more palatable sizes. It comes with a neat 500 free transcodes per month, and it’s quite cheap after that.

Here’s a little script to help you with the work a bit.

Prerequisites:
bash, jq and curl.

Save the file as “tinify” and do a chmod 755 tinify

Flags:
-k <key> = API key for tinypng.com – can be omitted if specified in the environment variable by export TINIFY_API="<api_key>"
-f <file> = Filename to compress
-r = Replace the original file with the compressed file. If not specified, the output file will be named tiny-<filename>
-v = verbose output
-s = Show compression statistics (1 line per file)

#!/usr/bin/env bash
# (C) EmberLabs / Chris Sprucefield 2023.
# License: CC BY.

key=''
file=''
r_flag=false
v_flag=false
s_flag=false
s_arg="-s"

if [ "${TINIFY_API}" != "" ]
then
    key="${TINIFY_API}" # -k on the command line overrides this
fi

while getopts 'rk:f:vsh?' flag; do
case "${flag}" in
    r) r_flag=true ;;
    k) key="${OPTARG}" ;;
    f) file="${OPTARG}" ;;
    v) v_flag=true
       s_arg="" ;;
    s) s_flag=true ;;
    *)
        echo "<cmd> -? | -h This help text"
        echo " -r Replace the original file with tinified file."
        echo " -k <apikey> The API key for tinify (or from \$TINIFY_API environment variable if set)"
        echo " -f <filename> The filename to encode (and replace if -r is specified)"
        echo " -v Verbose output"
        echo " -s Show compression statistics"
        exit 1
        ;;
    esac
done

if [ "${v_flag}" == true ] ; then echo "Processing ${file}" ; fi
JSON="$(curl ${s_arg} --user "api:${key}" --data-binary "@${file}" -i https://api.tinify.com/shrink | tail -1)"
URL="$(echo "${JSON}" | jq '.output.url' | sed 's/\"//g')"
ISIZE="$(echo "${JSON}" | jq '.input.size')"
OSIZE="$(echo "${JSON}" | jq '.output.size')"
RATIO="$(echo "${JSON}" | jq '.output.ratio')"
W="$(echo "${JSON}" | jq '.output.width')"
H="$(echo "${JSON}" | jq '.output.height')"

if [ "${ISIZE}" == "${OSIZE}" ]
then
    if [ "${v_flag}" == true ] ; then echo "No compression on ${file} - skipped." ; fi
    exit 0
fi

if [ "${URL}" != "null" ]
then
    if [ "${v_flag}" == true ] ; then echo "Fetching ${URL}" ; fi
    curl ${s_arg} "${URL}" -o "tiny-${file}"

    if [ "${r_flag}" == true ]
    then
        if [ "${v_flag}" == true ] ; then echo "Replacing original file" ; fi
        mv -f "tiny-${file}" "${file}"
    fi
    if [ "${s_flag}" == true ]
    then
        printf "%-40s %5d x %-5d In: %8d Out: %8d Ratio: %2.5f\n" "${file}" "${W}" "${H}" "${ISIZE}" "${OSIZE}" "${RATIO}"
    fi
else
    echo "Invalid response. (incorrect API key?)"
    exit 1
fi
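To run the script over a whole directory of PNGs, a small loop is enough. In this sketch the `tinify` call is stubbed out with a function so the loop can be exercised without an API key; drop the stub to invoke the real script:

```shell
#!/usr/bin/env bash
# Batch sketch: run tinify over every PNG in a directory.
# The stub below stands in for the real script so the loop runs offline.
tinify() { echo "would tinify: $*"; }

shopt -s nullglob              # an empty directory yields zero iterations
mkdir -p demo
: > demo/a.png                 # create two empty demo files
: > demo/b.png

for f in demo/*.png; do
  tinify -r -s -f "$f"
done
```

With the real script in place of the stub, each iteration spends one of the monthly transcodes, so `-s` is handy for keeping an eye on the totals.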

 


Easily mirror a BitBucket workspace?

So, you have a large pile of Bitbucket repos that you want to check out, mirror or even back up?

Here’s a shellscript for you that does the trick.

Prerequisites:

  • jq, bash and git installed on the machine.
  • You have an app password on Bitbucket (created under personal settings).
  • You have an SSH key on Bitbucket, and you can check out via ssh.

Store the script as something like “bb” and issue “chmod 755 bb”

There are 3 commands:
“bb get” will fetch the lists of the repos.
“bb update” will mass check out the repos if they don’t exist locally, or update them.
“bb backup” will check out the repos in mirror format, and make a tgz file of it all.

The checkouts will be done under the path ./<org>/<project>/<repo>
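The `get` command pages through the Bitbucket 2.0 API until a response arrives without a `"next"` link. That paging loop can be exercised offline with a stubbed fetch; here `fetch_page` stands in for the script's curl call and fakes a three-page result set:

```shell
#!/usr/bin/env bash
# Paging sketch: keep fetching until the response has no "next" link.
# fetch_page stands in for the script's curl call to api.bitbucket.org,
# here faking a three-page result set.
fetch_page() {
  if [ "$1" -lt 3 ]; then
    echo '{"values":[{}],"next":"https://api.bitbucket.org/..."}'
  else
    echo '{"values":[{}]}'
  fi
}

PAGE=1
while :; do
  fetch_page "${PAGE}" > "repolist-${PAGE}.json"
  grep -q '"next":' "repolist-${PAGE}.json" || break
  PAGE=$((PAGE+1))
done
echo "Fetched ${PAGE} pages"
```

The real script does the same with `pagelen=100`, writing one `repolist-<n>.json` per page for the later `update`/`backup` passes to consume.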

Enjoy!

#!/usr/bin/env bash
# ###########################################################################
# (C) EmberLabs / Chris Sprucefield. 
# Licensed under CC BY | https://creativecommons.org/licenses/by/4.0/
# Basics
ORG=""  # Fill in your org name here. This is the workspace name. 
USER="" # Fill in your username here.
PASS="" # Fill in your BB APP password here.

function getRepoLists() {
    AUTH="Basic $(echo -ne "${USER}:${PASS}" | base64)"
    PAGE="1"

    echo "--------------------------------------------------------------------"
    echo "Getting the ${ORG} repository lists from BitBucket"
    BURL="https://api.bitbucket.org/2.0/repositories/${ORG}?pagelen=100"
    GO="1"
    rm -f repolist-*.json
    echo -n "Getting page "
    while [ "$GO" != "0" ]
    do
      echo -n "${PAGE} "
      curl -s -H "Accept: application/json" -H "Authorization: ${AUTH}" "${BURL}&page=${PAGE}" -o "repolist-${PAGE}.json"

      if [ "$(grep "\"next\":" repolist-${PAGE}.json)" == "" ]
      then
        GO="0"
      fi

      let PAGE=$PAGE+1

    done
    echo ""
}

function cloneRefreshRepos() {
    CWD="$(pwd)"

    for list in $(ls repolist-*.json)
    do
      echo "Processing list $list"
      O="0"
      CONT="1"
      while [ "${CONT}" == "1" ]
      do
        cd "${CWD}"
        OBJECT="$(jq ".values[${O}]" "${CWD}/$list")"

        if [ "${OBJECT}" != "null" ]
        then
          PROJECT="$(echo "$OBJECT" | jq -r '.project.name')"
          NAME="$(echo "$OBJECT" | jq -r '.name')"
          SLUG="$(echo "$OBJECT" | jq -r '.slug')"
          CLONE="$(echo "$OBJECT" | jq -r '.links.clone[1].href')" # clone[1] = the ssh clone URL

          if [ "$1" == "mirror" ]
          then
            mkdir -p "${ORG}-Mirror/${PROJECT}"
            cd "${ORG}-Mirror/${PROJECT}"
            if [ ! -e "${SLUG}" ]
            then
              echo "----------------------------------------------------------"
              git clone --mirror "${CLONE}"
              echo ""
            fi
          else
            mkdir -p "${ORG}/${PROJECT}"
            cd "${ORG}/${PROJECT}"
            if [ ! -e "${SLUG}" ]
            then
              echo "----------------------------------------------------------"
              echo "Cloning ${NAME}"
              git clone "${CLONE}"
              echo ""
            else
              cd "${SLUG}"
              echo "----------------------------------------------------------"
              echo "Updating ${NAME}"
              git fetch --all
              git pull
              echo ""
            fi
          fi

        else
          CONT="0"
        fi

        let O=$O+1
      done
    done
    cd "${CWD}"
}


case $1 in
  get)
    # ########################################################################
    # Get the repo lists
    getRepoLists
    ;;

  update)
    # ########################################################################
    # Process the lists.
    cloneRefreshRepos
    ;;

  backup)
    # ########################################################################
    # Process the lists.
    cloneRefreshRepos mirror
    tar -zcf "${ORG}-Mirror-$(date -I).tgz" "${ORG}-Mirror"
    rm -fr "${ORG}-Mirror"
    ;;

  *)
    echo "Usage: "
    echo "<cmd> get            Get the repo lists"
    echo "<cmd> update         Update / checkout repos from BB"
    echo "<cmd> backup         Create a mirror backup of BB"
esac