
Some thoughts on the future concept of soft hardware.

Concept image: an imagined illustration of reconfigurable computing, combining current and future concepts.

I have been considering hardware solutions for many years, since around the late 80s, designing some of them and experimenting with even more.

I have worked on research in reconfigurable computing and advised other research projects in the same and similar areas. I have also run simulated scenarios on virtual, self-generated parallel processing units (PPUs). These resemble CPUs in some ways, but differ in that, while they have a RISC-like basic instruction set, they are self-generated to transfer complex and heavy tasks into discrete hardware. They can also reconfigure and extend their instruction set on the fly, moving computation from clocked sequential execution to discretely clocked or free-flowing deterministic logic, which often yields speed improvements of multiple orders of magnitude over traditional processing.

Couple this with the modern appetite and demand for parallelism and multithreading, and consider: what if we had a simpler PPU, something we could throw our application at, where we could create arbitrarily complex instructions that would be automatically translated into hardware, and where we had access to hundreds or even thousands of discrete hardware threads, all running on PPUs, capable of true cycle-by-cycle parallel processing without the context-switching penalty of a traditional CPU?
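Purely as a software analogy, the fan-out described above can be sketched in Go, with goroutines standing in for the hypothetical discrete hardware threads. This is only an illustration of the programming model, not of real PPU hardware: goroutines are still scheduled by a runtime and do not give true cycle-by-cycle parallelism. The `work` function is a hypothetical stand-in for one of the "arbitrarily complex instructions".

```go
package main

import (
	"fmt"
	"sync"
)

// work stands in for an arbitrarily complex "instruction" that, on an
// imagined PPU, would be translated into discrete hardware. Here it is
// just plain software.
func work(n int) int {
	return n * n
}

func main() {
	// Thousands of lightweight workers, as in the thought experiment.
	const threads = 1000

	results := make([]int, threads)
	var wg sync.WaitGroup

	for i := 0; i < threads; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = work(i)
		}(i)
	}
	wg.Wait()

	fmt.Println(results[999]) // 998001
}
```

On real hardware the equivalent would be free of both the goroutine scheduler and CPU context switches, which is exactly the penalty the PPU concept aims to remove.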

There is a lot of talk today about CPUs vs. GPUs vs. DPUs vs. TPUs. But what if we had a merger: an RCU that incorporated components of all of them, allowing massively scalable, parallel translation of software into discrete hardware solutions, by itself, on demand?

Imagine a scenario where writing software no longer means executing static code in the classic form of a set of sequential steps. Instead, the program is transformed into a combination of classic code and discrete deterministic logic, composed of all of the above technologies (and new, upcoming ones) in combination. To top it off, the machine itself analyzes the performance of the solution, both software and hardware, to find better and more efficient ways of doing the job, coming up with a faster solution by itself, generating new code and reconfiguring itself to be more efficient.
Welcome to the concept of the RCU. (No, not the classic Read-Copy-Update concept…)

For the future, I see a merger of all these aforementioned components into the “RCU” – a Reconfigurable Compute Unit: no longer distinct types of computing devices, but a single unit in which different kinds of compute are merged, with elements of the different technologies called upon and utilized by the unit itself. Its own behavioral and performance analysis, which could very well be driven by generative AI, will continuously find new ways to make it more efficient.

After all, turning software into hardware is nothing new, and it is not rocket science; it is a well-understood and commonly utilized concept. What is new is making the hardware build itself to suit its needs, gaining performance incrementally by analyzing itself, not only through discrete logic but through new, smarter instructions, created on the fly based on the needs of the software.

Such tasks and problems are commonly not massively compute-heavy; like most everyday computational tasks, they are relatively simple and can be served by relatively simple, low-powered solutions. What makes most tasks go fast is either massive parallelism or, where that is not suitable, clever solutions where you do not have to rely on sequential steps, but on solution flows.

In the RCU scenario, you, as a developer, could focus on simply getting a functional solution to the problem and let the machine take care of the analysis and optimization. This could also be coupled with adaptive, descriptive problem-to-solution generation, as we are entering an era where this is both technically and practically feasible.

This has been a research journey in both thought and action since around 1990, and it is still ongoing at EmberLabs.

Are you ready for what is coming?
Are you ready to bite?

If you want to know more and possibly collaborate, we can talk.


GPS Location?

Need a routine to determine if a lat/long is inside or outside a specific area?

Here’s a Golang routine for this, which can easily be adapted to any other language.

/*
 * Free to use as you see fit. 
 */

package gps

type polygon [][]float64

// Build the polygon from {latitude, longitude} pairs (minimum 3 vertices):
// poly := [][]float64{{1, 1}, {1, 2}, {2, 2}, {2, 1}}
// Repeating the first vertex at the end to close the polygon is optional;
// the loop below wraps around automatically. Vertex order (clockwise or
// counter-clockwise) does not matter for this test.
// in := Inside(1.5, 1.5, poly) --> true
// in := Inside(2.5, 1.5, poly) --> false

// Inside reports whether a GPS coordinate lies inside the polygon, using
// the classic ray-casting (even-odd) rule: a horizontal ray from the point
// crosses the boundary an odd number of times iff the point is inside.
func Inside(latitude float64, longitude float64, boundary polygon) bool {
    inside := false

    j := len(boundary) - 1
    for i := 0; i < len(boundary); i++ {

        latI := boundary[i][0]
        lonI := boundary[i][1]

        latJ := boundary[j][0]
        lonJ := boundary[j][1]

        // Does the edge (j -> i) cross the ray through the point?
        // The latitude check also guards against division by zero below:
        // it can only be true when latI != latJ.
        intersect := ((latI > latitude) != (latJ > latitude)) &&
            (longitude < (lonJ-lonI)*(latitude-latI)/(latJ-latI)+lonI)

        if intersect {
            inside = !inside
        }
        j = i
    }

    return inside
}
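Here is a minimal, self-contained usage sketch. The routine is inlined (as a lowercase `inside`) so the example runs standalone; in practice you would import the package above and call `Inside`. The polygon is a hypothetical rough box around the Stockholm area, given as {latitude, longitude} pairs.

```go
package main

import "fmt"

// inside is the ray-casting test from the routine above, inlined for a
// runnable demo. Polygon vertices are {latitude, longitude} pairs.
func inside(lat, lon float64, boundary [][]float64) bool {
	in := false
	j := len(boundary) - 1
	for i := 0; i < len(boundary); i++ {
		latI, lonI := boundary[i][0], boundary[i][1]
		latJ, lonJ := boundary[j][0], boundary[j][1]
		// Toggle on each edge crossed by the ray through the point.
		if ((latI > lat) != (latJ > lat)) &&
			(lon < (lonJ-lonI)*(lat-latI)/(latJ-latI)+lonI) {
			in = !in
		}
		j = i
	}
	return in
}

func main() {
	// A hypothetical rough bounding polygon, {latitude, longitude} pairs.
	area := [][]float64{{59.0, 17.6}, {59.6, 17.6}, {59.6, 18.4}, {59.0, 18.4}}

	fmt.Println(inside(59.33, 18.07, area)) // true  (point within the box)
	fmt.Println(inside(57.70, 11.97, area)) // false (point outside the box)
}
```

Note that this treats coordinates as a flat plane, which is fine for small areas but will misbehave for polygons spanning the antimeridian or the poles.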