Btw…
Quick view from Xmas/NY Gozo 2024!
Legacy applications…
… the regular pain in everyone’s back.
Do you have a legacy application that needs updating, or even a full rewrite?
When application rewrites fail, it is usually due to a combination of factors. The common culprits listed below are the primary reasons for failure – not the language the application is written in, not the conversion from one language to another, and not some inability of the new language to do the work of the old one.
Any Turing-complete language could do the job – literally, even a hellish language like Brainf*ck.
This shows that the language in itself is not the problem; more often than not it comes down to a simple question – what is the right tool for the job?
I prefer working with Go, as it is a modern language that works equally well on almost any platform, and it is fast to develop and get working results in.
It also has the benefit of being close enough to many other languages that their developers can understand it without any issues.
Go also largely avoids the "legacy library hell" that languages like Python, Java, or C++ suffer from: its module system takes a modern approach, with efficient mechanisms for keeping dependencies up to date.
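As a rough sketch of what that looks like in practice (assuming an existing Go module checked out locally – the commands are standard Go tooling, but whether they fit your workflow depends on your setup):

```
# Show available updates for all dependencies of the current module
go list -m -u all

# Update direct and indirect dependencies to their latest minor/patch versions
go get -u ./...

# Prune unused dependencies and sync go.mod / go.sum
go mod tidy
```

Because versions are pinned in go.mod and checksummed in go.sum, updates are explicit and reproducible rather than whatever happens to be installed on the build machine.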
So what ARE the big issues then?
Let's start with a classic "problem child" of yesteryear – COBOL – as a stand-in for the landscape of more "mature" languages, since the very same core issues apply to pretty much any language.
It really doesn't matter what the language is; the underlying problems are common to all legacy applications.
Let's look at a few key points.
1. Lack of Documentation and Knowledge
2. Underestimating System Complexity
3. Scope Creep and Poor Requirements Gathering
4. Mismatch Between New and Existing Systems
5. Cultural and Organizational Resistance
6. Testing Challenges
7. Skill Gaps
8. Cost and Time Overruns
9. Failure to Preserve Legacy Business Logic
Key Point: Lack Of Documentation Amplifies All Other Problems
When documentation is lacking, every other issue is compounded:
Solution Approaches?
Why Modern Languages Like Go?
Conclusion
Rewriting an application is a significant undertaking, but with careful preparation, stakeholder alignment, and the use of modern tools and languages, it can transform outdated systems into robust, efficient platforms. By addressing risks head-on and employing best practices, organizations can successfully modernize their applications while minimizing disruption and maximizing value.
Do you have legacy applications that need to be reworked, modernized, or documented?
.. all while using modern tools, technologies, and keeping future maintainability and support in mind?
Let’s talk.
It's a quick, rough-and-dirty edit of some footage after improving the camera settings.
Please forgive the movements – I am still bumbling about, getting to know the drone… It's a journey…
Next one up will likely be a goggle flight, unless some nice views get in the way before that… =D
The weather for the next few days promises high winds and rain, so if I don't fly and post,
you know why – I won't do unsafe flights…
We do all our flights in line with regulations.
Safety, and doing it right, first…
So, you are writing a CV?
.. and you want it to look good, be easily readable and well-received?
As someone who has read many CVs, I know what I would like to see and what I really don't like, and I believe many recruiters and hiring managers feel the same.
For example, the EuroPass CV format is frowned upon by almost anyone recruiting, so please stay away from it: these are long, often unstructured CVs where you have to read multiple pages to get an idea of the person's skills, experience, and history. Also, unless you are a graphic designer, don't go overboard with being creative. Keep it clean and easily readable – feel free to add a personal design touch to the header or the side of the page, but keep the reading area clean.
Readability is really important: give the recipient an easy way to assess the important information quickly, and make sure the information is grouped and organized well.
Recruiters have limited time to look at your CV, and you have one chance to make it through that first screening. The very job of your CV is to get you past that first hurdle – landing you the interview.
The CV is your personal representative in this first stage, and it has to be just as neat, clean, and well-dressed as you will have to be when going to the interview.
Also, please do remember to keep your CV updated at regular intervals!
This gets us to the base rules of a good CV:
Page 1 – About half a page, which is the cover letter, containing a short summary of your strengths, highlights, character and visions.
Please note that a cover letter is not always required; if it isn't, exclude it from the CV and just keep it ready and up to date for if/when it's needed. It also serves as an example of your ability to express yourself in free text.
Page 2 – A single page containing your contact details and personal info, your skills, a summarized work history, and other summary details.
Page 3 and onward is the extended work history, starting with the most recent position. Here you explain the highlights, work, and responsibilities of each job in more detail. Use the work period (y-m to y-m), position, and company as the title.
What about using AI like ChatGPT, Gemini and others in CV’s?
A few words of caution are in order here.
If you DO use AI, please rewrite what was suggested in your own words, as overly hyped and polished resume language instead of naturally flowing language can be seen as a red flag.
Employers' perspectives on using ChatGPT to assist with your resume vary: some may appreciate that you're embracing new technology, while others might wonder whether you lack the basic skills needed to do the job and are relying on the AI to do it for you.
Do companies check your resume for AI?
Yes, many companies do check resumes for AI-generated content. They use Applicant Tracking Systems (ATS) to scan for specific keywords and flag generic language while hiring managers look for inconsistencies and overly polished phrases. It’s essential to review and customize your AI-assisted resume to ensure it accurately reflects your experience and skills.
Avoid the “buzzword bingo!”
While it is perfectly expected, and even wanted, that you name relevant skills, technologies, and similar things by their proper names, please avoid turning it into "buzzword bingo" by overusing clichés such as: "team player", "organizational skills", "detail oriented", "hard-working", "passion for", "results-focused", "fast-paced environment", "quick learner", and so on.
Keep the language as factual as you can, keep it short, but do express what you did, and what you have achieved.
Buzzword-cramming a CV is a good way to get it rejected: the CV stops making sense and just becomes a pile of words and phrases stacked on top of each other.
Having said this, the occasional use, where it is warranted and proper, is absolutely fine, especially if you can show a sample of that quick learning of a new skill that solved the issue.
Download the free [ CV-Template ] (docx format)
Feel free to use / modify as you wish!
Good luck in your job hunt!
Ouch!
Microsoft is not having a good time right now – this is a zero-day, zero-click vulnerability, and it's pretty… serious.
I’ll let Dave explain it.
Here’s the POC exploit:
https://github.com/ynwarcs/CVE-2024-38063
The patch for this should be available in the latest update from Microsoft,
so please keep your machines updated!
Thoughts on AI, Security and practical day to day use.
As mentioned before, I am involved in R&D in a related area, "AI hardware", which brings me to AI and its more general use.
These days there is almost a competition going on over using AI wherever possible, regardless of whether it's needed, practically usable, or actually serves a purpose. It's quite understandable: it is hard to sell a product and be competitive these days without the word "AI" crammed somewhere into the sales pitch.
So let’s have a little bit of a pragmatic look at it.
So what is an AI?
Most of the AIs today are LLMs (Large Language Models): software that emulates neurons, trained on large masses of data (the Internet).
There are basically three models for how you train an AI, but most importantly, no AIs are trained on the fly – doing so would effectively destroy the neural network setup in flight, as things stand today.
We are simply not there yet with dynamic AI LLMs..
You would only retrain the models based on existing data plus any additional data gathered during sessions, as training is very taxing on computational and financial resources.
This in turn means the risk of live leakage is very low, but the risk of future leakage remains, since questions and other supplied data may be incorporated into later training material.
As always, there are of course variations to the above, but it gives you a rough insight as to what it is and how it works.
A little bit of history from a developer’s perspective.
In the past, when there were only books and manuals, developers had to rely on these, often knowing them by heart to actually use them. The amount of information was quite limited, and it was fairly easy. Essentially everything was written from scratch.
As we know, history happened: the Internet came to be, and with it things like Google. Open-source solutions exploded, and then came the help sites covering anything development-related.
Sites like Stack Exchange and many others appeared, and code samples were shared between users. Because of perceived security risks, many developers were banned by their companies from using the Internet to search for solutions, even for common problems – the argument being that you could get "hacked" by a copy/paste, or leak information about your IP or other precious items.
This applied even to generic information, such as an error code and its cause in a specific product. Eventually the Internet was generally accepted and became part of everyday business life.
The primary risk was, and is, as always, the user who uncritically does a full copy/paste: after providing enough specific information for a hacker to craft a malicious response tailored to the specific environment and solution, the user then implements it in production code without any consideration, review, or peer review of commits.
The exact same can be said of any AI, whether in-app (typically a developer IDE) or external, like ChatGPT's website. Many infosec teams ignore the fact that Google search terms can be looked up to see what is being searched – the exact same purported leakage mechanism, in near real time – whereas this typically does not apply to AIs. Never mind that a question on a site like Stack Exchange stays there forever in full text, while an AI question is ephemeral and will not be reused as a verbatim ready-made answer, even if it eventually becomes part of the training material – as numerical weights, not actual full text.
In short: an AI cannot recall or reproduce a specific question from another user, because that is simply not how AIs and LLMs work. Sessions run in isolation; the data may later be used for training, as numerical "weights" for specific items, but never as plaintext data.
Why the AI instead of “Google”?
Where Google and the others are mainly about static content, an AI is highly dynamic: it can actually understand what you want and quickly narrow the answer down to what you need, without the "fluff" of wading through endless text and sites. It can do this by incorporating third-party sources or searching on your behalf, and this is exactly what makes AI so useful – speed, and output limited to what you actually asked for. It is why AI is quickly becoming the "Google" replacement.
An example:
Compared to Google et al.: if you're stuck on a problem, you can describe it and get a reasoned, explained answer specific to that problem, without having to assemble and parse the information yourself – something that can be very tedious.
AI:
(This passes my sanity inspection as a proposal for a solution…)
…. versus Google:
What about Information Security then?
A couple of ground rules when it comes to dealing with AI’s:
Again, always be the second opinion. Never just copy/paste: actually look at what was presented and make your own informed decision – does this make sense? – and never assume the answer is 100 percent correct.
This is what any responsible developer would do. And if there were malicious intent, it would be far easier to act directly than to go to the AI, as the developer would not need to explain the environment and how a specific exploit should be implemented – that is information they already have.
Rather than just banning users, the goal of the infosec team should be to embrace it – but educate the staff on how to use it safely!
Prescribe pragmatic, safer ways to interact with AIs, because in the end they are incredibly useful tools that will not go away – just as the Internet didn't go away and eventually had to be accepted, despite the security teams' kicking and screaming.
Trust me, it will be used no matter what anyone says, because it is simply too useful not to use – all the more so in time- and resource-pressured teams, where a lot of tedious work can be simplified and done very quickly compared to the alternatives. It is far better to have a mutual understanding of good practices, do's and don'ts, than a skunkworks division.
Additionally, keep in mind that the absolute majority of security tools today use AI – code monitoring and validation, antivirus, API scanners, and many others – inspecting code, classified document files, and more, regardless of security markings, on pretty much any hardware the company owns or maintains. It's already there, and if there were a leak you would likely not know about it until way too late (looking specifically at you, MS Copilot). Such an event would be a much bigger threat than the occasional, purposeful use of AI by properly trained staff.
All these modern security tools are entirely based on AI or AI input / processing, and all will suffer the same issue of possible data leakage, one way or the other.
Let’s be very clear about something here:
Any tool that claims it will not use customer data is simply marketing hype: if it didn't, the vendor would soon be out of business, unable to keep up with the evolution of code and security threats compared to their competitors. All the talk about "secure models" is marketing fluff. Where do you think their current training material actually comes from?
Hint: They didn’t invent it…
If you deploy any of these AI security tools for wholesale scanning of company IP, it makes absolutely no sense to simultaneously and unconditionally ban the use of AI for developers or other creative staff. As mentioned before, staff training on proper use is the absolute key here; a knee-jerk ban driven by fear of unknowns is absolutely NOT the answer, as all you will achieve is an unsafe skunkworks project. Like it or not, that is reality…
Takehomes for the security team:
Deal with it!
The only reasonable thing you can do at this point is accept "defeat", just as you eventually had to with the emergence of the Internet: train your staff in reasonable use, protect company IP and personal data, and cover security with working guidelines of do's and don'ts – allowing agreed, controlled use rather than the chaotic underground skunkworks model that will otherwise emerge regardless of what you say, and over which you will have absolutely no control.
Never mind the fact that you would effectively "outlaw" most, if not all, modern developer IDEs, which are commonly based on AI support in part or in full, relegating developers back to Notepad or similar "development" tools.
Trying to ban the use of AI will be about as effective as 1920s Prohibition was… (NOT!)
Then what?
You should consider specific services (and I am not plugging anyone here) like ChatGPT's enterprise offering, where you actually get the benefits plus control over security and privacy, preventing leakage and reuse for training.
If you can save an hour a day per developer, increasing their productivity, the expenditure is easy to justify – and you gain control over what is done, how it's done, who does it, and on what basis. It's a win-win that will gain acceptance.
If you can’t beat them, be pragmatic and join them, making sure it’s done responsibly…
So you want to keep your golang up to date at all times?
Add this to /bin/go-update, stick it in your crontab as a daily job, and you will always be up to date.
Rework as needed for your favourite Linux distro..
#!/bin/bash
cd /tmp

# Fetch the latest stable Go version string, e.g. "go1.23.1"
CVERSION="$(curl -s https://go.dev/VERSION?m=text | grep -o 'go[0-9.]*' | head -n1)"

# Download the release, replace the old installation, and clean up
wget "https://go.dev/dl/${CVERSION}.linux-amd64.tar.gz"
rm -rf /usr/local/go
tar -C /usr/local -xzf "${CVERSION}.linux-amd64.tar.gz"
rm "${CVERSION}.linux-amd64.tar.gz"

go version
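To run it daily as described, a crontab entry along these lines would do (the time of day and log path here are just suggestions, adjust to taste):

```
# m  h  dom mon dow  command
30 4  *   *   *    /bin/go-update >/var/log/go-update.log 2>&1
```

Redirecting the output to a log file makes it easy to check later which version the job actually installed.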
Njoy!!
For all the Linux admins out there – Add this to the header of all your crontabs.
… and it becomes a lot clearer to anyone reading them…
# * * * * * command_to_be_executed
# - - - - -
# | | | | |
# | | | | +----- day of the week (0 - 6) (Sunday=0)
# | | | +--------- month (1 - 12)
# | | +------------- day of the month (1 - 31)
# | +----------------- hour (0 - 23)
# +--------------------- min (0 - 59)
#
# Asterisk (*)   any value
# Comma (,)      value list separator (0,20,30,45)
# Dash (-)       range of values (8-17)
# Slash (/)      step values (*/20)
#
# @reboot        Run once, at startup
# @yearly        Run once a year, "0 0 1 1 *"
# @annually      Same as @yearly
# @monthly       Run once a month, "0 0 1 * *"
# @weekly        Run once a week, "0 0 * * 0"
# @daily         Run once a day, "0 0 * * *"
# @hourly        Run once an hour, "0 * * * *"
Njoy!