
Sunday 22 September 2024

What is JSON transcoding?

A tool that enables gRPC to serve as a single API for both your microservices and your REST web front end, by automatically translating your gRPC API into JSON RESTful services.


What is gRPC?

gRPC is a language-agnostic data transfer framework designed for cloud microservices and implemented over HTTP/2. It was created by Google around the same time they released Kubernetes.
It is to some extent equivalent to the REST standard that was developed over HTTP/1.1 and used to transfer data for web browsers. But REST uses plain text JSON, whereas gRPC encodes messages as binary protobuf format data.

There are similarities between gRPC development and REST development, hence similar tools.
For scripted API interactions REST has Postman, which also has some gRPC support, but there are also gRPC-specific tools such as Kreya and grpcui.

Why use gRPC rather than REST?

gRPC can be up to 10 times faster than REST and is best suited to cross-microservice remote procedure calls.

gRPC uses protoc to generate any or all of Go, Python, C#, C++, Java, Node.js, PHP, Ruby, Dart, and Kotlin code out of the box - allowing your microservice engineers to use native Go, your SREs Python, and other integrators their language of choice. Each language can import the autogenerated API libraries for it, which are generated from the *.proto source files that developers create to define the API.

You have a single versioned API defined in one place for all services. This may be a dedicated myservices-proto package containing all your source/*.proto files and a script that runs protoc to generate the libraries for each language your company uses, along with the master API definition file, e.g. descriptor/proto.pb, or proto.pbtxt for the larger, human-readable version.
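As a minimal sketch of such a script, assuming protoc and the Go plugins (protoc-gen-go, protoc-gen-go-grpc) are installed, and using the illustrative package layout above ...

protoc -I source \
    --go_out=gen/go --go-grpc_out=gen/go \
    --include_imports --descriptor_set_out=descriptor/proto.pb \
    source/your/service/v1/service.proto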

The gRPC protocol is strongly typed and allows full validation of data in and out. It is a binary format, so more compact. JSON via REST does not allow such control of data typing and validation. It was designed as a simple serialisation format for JavaScript objects, in a dynamically typed web page scripting language - not as a performant backend RPC protocol for cloud microservices.

Because of the much looser standards around REST and JSON, many people adopt the Swagger framework to help standardize and document their REST API, whilst gRPC has formal standards in its core protocols.

Why use JSON transcoding?


Why still use REST too?

The first question that occurs is why run a REST API at all? The reason is that, whilst it uses large, slow and minimally typed messaging, REST is the default standard approach for front-end web development. Since web front ends are implemented in JavaScript, it is natural to build them against a RESTful backend that provides data in the native JSON format.

Added to this, even the latest web browsers do not give full access to the HTTP/2 features, such as trailers, that gRPC requires. This in turn leads to poor support in the JavaScript ecosystem. Also, for devx accessibility, gRPC messages are not immediately human readable.

Perhaps most importantly, gRPC was never designed to replace REST. It was designed for fast composition of backend, cloud-deployed internal services.
REST is a web protocol designed for loosely coupling services from across the web, where the loose standards, typing, simplicity and transparency of REST / JSON complement HTML5. Given that attempts to impose stricter typing, such as XHTML, were a failure, using gRPC as a front-end replacement for REST is asking for trouble. Instead, standardizing REST via Swagger, OpenAPI and the like is a more practical approach.

The front-end web world loves REST. But gRPC is far superior for more tightly coupled backend microservices. Given that, JSON transcoding is likely to remain a very useful means of saving on API proliferation and complexity, by providing a single API for both via your edge servers (i.e. the servers between your cloud-deployed services and the internet).


How does JSON transcoding work?

JSON transcoding suits a gRPC-centric, single master API design - the ideal approach when designing new cloud services built from microservices.

It is implemented by using the transcoder plugin, which can run in existing proxy servers such as Envoy Proxy. The plugin can use gRPC proto source files to autogenerate a JSON REST API without any code generation required.

Alternatively there is grpc-gateway, which generates the implementation from the proto files and requires a compilation step. This different implementation under the hood is not strictly transcoding, but in effect it does the same job.
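As a rough sketch of that generation step, assuming the protoc-gen-grpc-gateway plugin is installed (paths illustrative) ...

protoc -I source --grpc-gateway_out=gen/go \
    source/your/service/v1/service.proto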

Or, for the Microsoft world, there is the JSON transcoder for ASP.NET.

In all cases you create the REST API based on the google.api.http annotations standard in your *.proto files - adding the option block shown below to a simple example service ...

syntax = "proto3";
package your.service.v1;
option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";

import "google/api/annotations.proto";

message StringMessage {
  string value = 1;
}

service YourService {
  rpc Echo(StringMessage) returns (StringMessage) {
    option (google.api.http) = {
      post: "/v1/example/echo"
      body: "*"
    };
  }
}

The google.api.http set of annotations is used to define how the gRPC method can be called via REST.
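For example, with the annotation above and a transcoding proxy listening locally (the host and port here are illustrative), the Echo method can be called as plain JSON over REST ...

curl -d '{"value": "hello"}' -H 'Content-Type: application/json' \
    http://localhost:8080/v1/example/echo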


Why use it?

Use a transcoder and you only need to maintain your gRPC and your public REST API in one place - the gRPC proto files - with simple one-liner annotations.

You no longer need to develop any REST server code. Running up web servers that provide a REST API, and having to manually code them to keep them in sync with the backend gRPC API, can be dispensed with. Or, if you are running direct REST interfaces from your Golang microservices, these can be dropped as less type safe and more error prone - replacing them with gRPC microservices, and replacing data validation code layers in different backend languages with proto-level data validation in one place. Validation can be your own custom validator code, or you can use a full plugin such as buf.build or protoc-gen-validate.
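As a brief sketch of what proto-level validation can look like, assuming the protoc-gen-validate plugin, the example StringMessage could constrain its field like this ...

import "validate/validate.proto";

message StringMessage {
  // value must be a non-empty, filename-sized string
  string value = 1 [(validate.rules).string = {min_len: 1, max_len: 255}];
}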

Now, as you build your gRPC API, you also build a JSON RESTful one too - adding custom data validators at the gRPC level, and defining your full API in one place.

The gRPC API annotations give you a performant, autogenerated REST API, run via Envoy Proxy's fast C++ based edge server, for use by the JSON front end - automatically transcoding messages back and forth between REST requests / responses and gRPC ones.


What about Transcoding other HTTP content types that are not JSON?


It may be called a JSON transcoder, but it can also transcode other content types.

To transcode other HTTP content types, you must use the google.api.HttpBody type. Put the content in the body and set (and call the URL with) the appropriate Content-Type header. For example, for a gRPC CSV file download, e.g. getting log files ...


syntax = "proto3";
package your.service.v1;
option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";

import "google/api/annotations.proto";
import "google/api/httpbody.proto";

message StringMessage {
  string value = 1;
}

service YourService {
  rpc GetCSVFile(StringMessage) returns (google.api.HttpBody) {
    option (google.api.http) = {
      get: "/v1/example/echo.csv"
    };
  }
}

A Go implementation of the method to return the CSV file might be ...

import (
    "context"
    "embed"

    "google.golang.org/genproto/googleapis/api/httpbody"

    api_v1 "github.com/my-proto-pkg/generated/go/public/v1"
)

//go:embed *.csv
var EmbedFS embed.FS

// GetCSVFile returns a CSV file via gRPC as an HttpBody message
func (p *Provider) GetCSVFile(ctx context.Context, req *api_v1.StringMessage) (*httpbody.HttpBody, error) {
    // Look the file up by name in the embedded filesystem
    csvData, err := EmbedFS.ReadFile(req.Value)
    if err != nil {
        return nil, err
    }
    return &httpbody.HttpBody{
        ContentType: "text/csv",
        Data:        csvData,
    }, nil
}


A call to /v1/example/echo.csv?value=smalldata.csv with a Content-Type header of text/csv (or application/json) should return that file.

Other content types such as PDF can be similarly handled.


Data streaming large content types


When returning content other than compact gRPC message formats, another issue arises.
What if bigdata.csv is 2 GB in size? gRPC's default message size limit for received messages is 4 MB (sending is unlimited by default), so it is best to stream anything that may be over that size.

For large messages, response streaming needs to be used.

It is very simple to switch the protocol for gRPC and the transcoded HTTP REST requests and / or responses. If either the request or response type is prefixed with the word stream, then streaming handling is implemented both in gRPC and for the transcoded REST API. So to stream 2 GB files from REST, change the proto definition by adding the stream keyword ...

rpc StreamCSVFile(StringMessage) returns (stream google.api.HttpBody) {

That is not the only thing to be done, though - the main work is that the method implementation needs to stream the data too.

import (
    "bufio"
    "io"
    "os"

    "google.golang.org/genproto/googleapis/api/httpbody"

    api_v1 "github.com/my-proto-pkg/generated/go/public/v1"
)

// StreamCSVFile streams a large CSV file back as a series of HttpBody chunks
func (p *Provider) StreamCSVFile(req *api_v1.StringMessage, responseStream api_v1.YourService_StreamCSVFileServer) error {
    // For simplicity the requested file name (req.Value) is ignored here;
    // a real implementation would validate it and use it to locate the file
    f, err := os.Open("/tmp/bigdata.csv")
    if err != nil {
        return err
    }
    defer f.Close()

    r := bufio.NewReader(f)
    // Keep each chunk safely under the default 4 MB gRPC message size limit
    buf := make([]byte, 2*1024*1024)

    for {
        n, err := r.Read(buf)
        if n > 0 {
            resp := &httpbody.HttpBody{
                ContentType: "text/csv",
                Data:        buf[:n],
            }
            if err := responseStream.Send(resp); err != nil {
                return err
            }
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            return err
        }
    }
    return nil
}

When the file is fetched it is returned to the browser as streamed HTTP - so in chunks.
If you do this via Envoy with debug mode on, you can see it being served as a series of chunked responses via streaming HTTP, in response to a REST GET.

1. curl --header 'Content-Type: application/json' http://localhost:8080/v1/example/echo.csv?value=bigdata.csv
2. The transcoder translates that request to gRPC http://grpc_host:80/myproto.api.v1.StreamCsvFile
3. The microservice returns a stream of gRPC responses to Envoy, with the file chunked up into them
4. Envoy starts serving those chunks as HTTP responses to the curl web client
5. When all the responses are done, the task is complete and curl has downloaded the full file and stuck it back together.

When tested out locally on my Mac it downloaded files at over 100 Mb/sec, so transcoding does not appear to introduce any performance hit.

How to try out Envoy with the transcoder filter


Envoy can be downloaded and installed locally to try out JSON transcoding. You just need to be able to run up the gRPC service for it to talk to, and provide the proto.pb API definition to it via the Envoy config.yaml.

Envoy is easily installed on Linux or Mac

Once it is installed, run up your gRPC service or port-forward it from k8s or kind.
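If you do not already have a proto.pb descriptor file for your service, a minimal sketch of generating one with protoc's descriptor set flags (paths illustrative) would be ...

protoc -I protos --include_imports \
    --descriptor_set_out=protos/helloworld.pb \
    protos/helloworld.proto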

Copy its proto.pb to a local path such as protos/helloworld.pb for Envoy to access. Then run up Envoy ...

envoy --service-node ingress --service-cluster ingress -c envoy-config.yaml --log-level debug

A sample config for running a service is detailed in the Envoy transcoder help page.
Note that the REST path should not be added to the config with the current version 3 transcoder; the transcoder reads the paths for REST from the proto.pb.
The config just needs the gRPC dot-notation based URL, /helloworld.Greeter, that indicates the gRPC service.
- filters:
  - name: envoy.filters.network.http_connection_manager
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
      stat_prefix: grpc_json
      codec_type: AUTO
      route_config:
        name: local_route
        virtual_hosts:
        - name: local_service
          domains: ["*"]
          routes:
          - match:
              prefix: /helloworld.Greeter
            route:
              cluster: grpc_myproto
When port-forwarding from a service running in k8s, something along the following lines could be used to point Envoy at it.

clusters:
- name: grpc_myproto
  type: STATIC
  connect_timeout: 5s
  http2_protocol_options: {}
  load_assignment:
    cluster_name: grpc_myproto
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 127.0.0.1
              port_value: 9090

Note that the sample config from the Envoy transcoder help page is missing some useful config that you may want in production, e.g. convert_grpc_status: true, so that if gRPC errors occur they are transcoded and returned in the body of the HTTP response. By default (false), the body is blank and the gRPC error is only in the header. There is a full list of config options available.
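For reference, a minimal sketch of the transcoder HTTP filter config itself, with that option added (the descriptor path and service name follow the helloworld example above) ...

http_filters:
- name: envoy.filters.http.grpc_json_transcoder
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
    proto_descriptor: "protos/helloworld.pb"
    services: ["helloworld.Greeter"]
    convert_grpc_status: true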

Testing


If you want to write tests against REST in Go (rather than just testing the gRPC), you will need the Go JSON-annotated structs to test with, which marshal JSON back and forth.

Since the REST API is transcoded on the fly by Envoy, these don't exist anywhere in your source code. To generate them you need to use command line translation tools.

gRPC proto.pb > buf.build > OpenAPIv2 - at this point you have a JSON OpenAPI spec which can be used to create your REST tests in JavaScript, or any language with OpenAPI v2 support.
To do so in Go we can continue the pipeline with OpenAPIv2 > oapi-codegen > generated_v1.go
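A rough sketch of that pipeline on the command line, assuming the grpc-gateway protoc-gen-openapiv2 plugin and oapi-codegen are installed (file names illustrative) ...

protoc -I source --openapiv2_out=gen/openapi source/your/service/v1/service.proto
oapi-codegen -generate types -package generated_v1 \
    gen/openapi/your/service/v1/service.swagger.json > generated_v1.go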
A test using httpexpect (to clean up the raw syntax of HTTP testing) would be something like ...

resp := generated_v1.GetCsvResponse{}
e := httpexpect.Default(t, restSvcURL)
e.GET("/v1/example/echo.csv").Expect().Status(http.StatusOK).JSON().Decode(&resp)
assert.Equal(t, resp.Data, sampleCSV)

(this assumes our example GetCsvResponse embeds the google.api.HttpBody as Data and includes other fields)


Sunday 9 June 2024

Software Development with Generative AI - 2024 Update


Why write an update?


I wrote a blog post on Software Development with Generative AI last year, which questioned the approach of the current AI software authoring assistants. I believe the bigger picture still holds true: to fully utilize AI to write software will require an entirely different approach, changing the job of a software developer in a far more radical manner and perhaps making many of today's software languages redundant.

However I also raised the issue that I found the utility of the current generative AI helpers questionable for seasoned developers:

"The generative AI can help students and others who are learning to code in a computer language, but can it actually improve productivity for real, full time, developers who are fluent in that language?
I think that question is currently debatable... (but it is improving rapidly) ... We may reach that point within a year or two"

Well, it hasn't been a year or two, just 6 months. But I believe the addition of the Chat window to CoPilot, and an improvement in the accuracy of its models, has already made a significant difference.

On balance I would now say that even a fluent programmer may get some benefits from its use. Given the speed of improvement it is likely that all commercial programming will use an AI assistant within a few years. 


To delay the inevitable and not embed it into your work process is like King Canute commanding the sea to retreat. There are increasing numbers of alternatives available too. However, as CoPilot is the market leader, I believe it is worth going into slightly more depth on the current state of play with it.

Github Copilot Features

The new Chat window within your IDE gives you a context-sensitive version of Copilot ChatGPT that can act as a pair programmer and code reviewer for your work.

If you have enabled auto-complete, then you instigate it by writing functional comments, i.e. prompts, then tabbing out to accept the suggestions it responds with.

To override these prompts you can instead type a dot and get real code completion options (as long as your IDE is configured correctly). Since code completion has your whole codebase as context, it complements CoPilot reasonably well. But whilst code completion is always correct, CoPilot is less so - probably more like 75% now, compared to its initial release level of 50%.

It takes some time to improve the quality of your prompting. An effort must be made to eradicate any nuance, assumption, implication or subtlety from your English. Precise mechanical instructions are what is required. However, its language model will have learnt common usage. So if you ask it to sort out your variables, it will understand that you mean replace all hardcoded values in the body of your code with a set of constants defined at the top, explain that this is what it thinks you mean, and give you the code that does that.

You can ask it anything about the usage of the language you are working in, how something should be coded, alternatives to that, etc. So taking a pair programming approach, explaining to CoPilot chat what you are about to code and why as you go, can be very useful. Given rubber duck programming is useful, having an intelligent duck that can answer back ... is clearly more so.

It excels as a learning tool, largely replacing Googling and Stack Overflow with an IDE-embedded search when learning new languages. But even for a language you know well, there can be details and nuances of usage you have overlooked, or changes in syntactic standards with new releases you have missed.

You can also ask it to give your file a code review, where it will list out a series of suggested refactors that it judges would improve it.

Copilot Limitations

Currently, however, there are many limitations. Understanding them helps you know how to use CoPilot, and not turn it off in frustration at its failings!

The most important one is that CoPilot's context is extremely limited. There is no RAG enhancement yet, no learning from your usage. It may seem to improve with usage, but that is just you getting better at using it. It does not learn about you and your coding style as you might expect, given that a dumb shopping site does that as standard.

It does not create a user context for you and populate it with your codebase. It simply grabs the content of the currently edited file, the Chat prompt text and the language version for the session as one big query. The same goes for the auto-suggestion, except there the chat text comes from the comments or doc strings on the preceding lines.

The lot is posted to a fixed CoPilot LLM that is some months out of date - although apparently it has weekly updates from continuous retraining.

This total lack of context can mean the only way you can get CoPilot to suggest what you actually want is to write very detailed prompts. It is often simpler to just cut and paste example code as comments into the file - please rewrite blah like this ... paste example. Since only if it is in the file or the latest Chat question will it get posted to inform the response.

At the time of writing, CoPilot is due to at least retain and learn from the Chat window history, to extend its context a little. But currently it only knows about the currently open file and the latest Chat message. Other providers have tools that do load the whole code base, for example Cody, plus there are open source tools to post more of your code base to ChatGPT or to an open source LLM.

As this blog post update indicates, the whole area is evolving at an extremely rapid pace.

The model it has for a language is fixed and dated. Less so for the core language, but take, for example, a newer version of the leading 3rd party Postgres library that came out 2 years ago. The majority of users are still on the previous one, since it is still maintained, and their syntax differs. CoPilot may only know the syntax for the old library, because that is what it was trained with, even though a later version is being imported in the file and so is in CoPilot's limited context. So any chat window or code prompts it suggests will be wrong.

I have yet to find that it brings up anything useful that I didn't already know about the code when using the code review feature, plus the suggestions can include things that are inapplicable or already applied. But I am sure it would be more useful when learning a new language.

AI prompting and commenting issue

Good practice for software teams around code commenting is that you should NOT stick in functional comments that just explain what the next few lines do. The team are developers and they can read the code just as quickly for its base functionality. Adding lots of functional commenting makes things unclear through excessive verbosity.
It is something that is only done for teaching people how to code, in example snippets. It has no place in production code.

Comments should be added to give wider context, caveats, assumptions etc. So commenting is all about explaining the Why, not the How.

Doc strings at the head of methods and packages can contain a summary of what the function does in terms of the codebase. So they are more functional in orientation, but as a big scale summary. So again they are a What, not a How.

It looks like current AI assistants may mess that up, since they need comments that are as close to pseudo code as possible. Adding information about real world issues, roadmap, the wider codebase, integration with other services ... i.e. all the Why, is likely to confuse them and degrade the auto-complete.

Unfortunately code comments are not AI prompts for generating code, and vice versa.
This suggests that you may want to write a temporary prompt as a comment to generate the code, then replace it with a proper comment once it has served its purpose, as sketched below.
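As a tiny illustrative sketch of that workflow (the code and names here are purely hypothetical) ...

// Temporary AI prompt (delete once the code is generated):
// replace the hardcoded timeout values in this file with the constants below.

// Proper comment, the Why: these timeouts match the upstream service's SLA.
const (
    connectTimeoutSeconds = 5
    readTimeoutSeconds    = 30
)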

Or otherwise introduce a separate form of hideable, prompt-marked comment that makes it clear what is for the AI and what is for the Human!

Alternatively use the chat window for code generation then paste it in.

Copilot Translation

Translation is an area where Copilot can be very beneficial. As a non-native English speaker you can interact with it in your own language for prompting and comments, and it will handle that, and translate any comments in the file to English if asked to.

Code translation is more problematic, since the whole structure of a program and its common libraries can differ between languages. But if the code is doing some very encapsulated common process - for example just maths operations, or file operations - it can extract the comments and prompts and regenerate the code in another language for you.

One can imagine that one day the only language anyone will need will be a very high level, succinct, English-like language, e.g. Python.
When you want to write in a verbose or low-level language, you just write the simpler prompts in a spoken language, using Python where it is faster to communicate explicitly than speech, since spoken languages are so unsuited to creating machine instructions.
Press a button and Copilot turns the lot into verbose C or Java code with English comments.

Saturday 27 April 2024

Software Engineering Hiring and Firing



The jump in interest rates to the highest level in over 20 years, which hit in summer 2023 for the US, UK and many other countries, is still impacting the software industry. Rates may be due to drop soon, but currently they have choked off investment, upped borrowing costs and led to many software companies making engineers redundant to please the markets.

For the UK the estimate is that around 8% of software industry jobs were made redundant. Although strangely, the overall trend in vacancies for software engineers continues to march upwards; the initial surge after the pandemic dipped last summer but has now recovered.
But if you work in the industry you are bound to have colleagues and friends who have been made redundant, if you are lucky enough not to have been impacted personally.

Given recent history, I thought it might be worth reflecting on my personal experience of the whole hiring and firing process in the tech industry. It is a UK-centric view, but the companies I have worked for in the last 8 years are US software companies.

I have been fired, hired, and have conducted technical interviews to hire others, giving me a few different perspectives.

This post is NOT about getting your first Software job

I first got a coding job in the public sector, as a self-taught web developer in the 1990s, before web development was a thing you could get a degree in. So I initially got a job in IT support, volunteered to act up (i.e. no pay increase) and built some websites that were needed, then became a full-time web developer through a portfolio of work, i.e. sites.

Today junior developers may have to prove themselves suitable by artificial measures. I skipped these, so I do not have any professional certifications, or any to recommend. I also don't know how to ace coding algorithm or personality profile assessments.

Once you are 5-10 years into a software career, none of those approaches are used for hiring decisions.

Only large companies are likely to subject you to them, and that is really out of fairness to the juniors who have to go through them, and to screen out dodgy applicants. Screening just needs to be passed; it will not have any input into whether you get the job. Hence acing the coding interview, as promoted by sites such as LeetCode, is not even a thing - only passing coding exercise systems in order to start, or switch to, a career as a developer. I would recommend starting an open source project instead, to demonstrate you can actually code.

The majority of small to medium software companies, and of job vacancies, require experience and in effect have no vacancies for the most junior software grades with less than 3 years under their belt. So they tend not to use any of these filtering methods. They just want to see proof that you are already a developer, and usually base that on face-to-face interviews and examples of your code that you provide them - much like how I was originally hired back in the 1990s.

I have only been subject to a LeetCode-style test once, which was for a generic job application, i.e. hiring for numbers of SREs of various seniority, for a FAANG.


 F I R E D 

When that unexpected one-to-one Zoom call with your manager appears in your calendar these days, it is unlikely to be great news 😓

In the majority of cases the firing process, or to be more polite, redundancy, is all about balancing the finances of the whole company or institution. As such it is very unlikely to be about you.

Of course people are also fired as individuals for various reasons: actually not being any good at their job, failing to get along with their manager, being a bad culture fit, jobs turning out not to be what was advertised, or expressing political views. Unlike where I once worked, in the UK education sector, where 50% of staff are union members, US software companies have less than 1% membership, so they don't tend to respond well to dissent.

Mostly this happens via failing probation, at around 15%, then maybe another 5% annually for disciplinary / performance improvement failure.

If you want to try getting individually fired then go the overemployed route. Get two or three jobs at once and test how long it takes before the company notices that you giving 110% is now only 40%, and fires you. The rule of thumb is: the larger the company, the longer it takes!

But this post's focus isn't individual firing, it's organizational hiring and firing.

Firing Reasons

  1. A company may be doing badly in a slow, long-term way, so it has to chop as part of a restructure and downsize to attempt to fix that.
  2. Alternatively the company could be doing really well, so it gets the attention of a big investment company and is bought up and merged with its rival. To fix overlap and justify the merger, both companies lose 20% of staff.
  3. Maybe it needs to pivot towards a new area (currently likely to be AI) and so chops 20% of its staff so it can hire 15% experienced, and pricey, AI developers.
  4. Or it may just have had a one-off external event that hit it financially. So to balance the earnings for that year and keep its share price good, it chops a bunch of staff. It will rehire next year, when it suits the balance sheet. This is the example in which I was made redundant along with 5% of staff; it was a big company, so that was a few thousand people globally.
  5. Finally it may be an industry-wide phenomenon, as it is with the current redundancies in the software industry. A world clampdown on easy, cheap loans means investment-driven industries such as tech are no longer awash with spare cash. Cutbacks look good right now, and keep the share price high.
     Hence redundancies that are nothing to do with the industry itself or its future prospects.

That is mirrored in who is fired. Companies do not keep a log book of gold stars and black marks against each employee. They do not use organizationally triggered rounds of redundancies to select individuals to fire. They certainly do not have the capability to accurately determine all the best employees and only fire the worst ones. You will be fired based on what part of the organization you are in, how much it is valued in the current strategy, and how much you cost vs others who could do your job. If you are currently between teams, or in a new team or role which has yet to establish itself, when the music stops - like musical chairs - bad luck, you are out.

The only personal element may be that if a whole team is seen as under-performing or difficult to manage, it might be axed, no matter that it contains a star performer. Decisions may also be geographic. Let's axe the Greek office, and save by withdrawing engineering from a country - which is again how I was made redundant; the rest of my team was in Greece.
Alternatively it may be: fire under 20 staff from each country, to avoid the more burdensome regulation around bulk layoffs.

The organization could also create an insecure / downturn atmosphere to encourage staff to leave, because it is a lot cheaper for people to leave than for the company to pay out redundancy settlements.

Redundancy keeps the average employees 😐

As a result, in response to significant redundancies an organisation will tend to lose more of the best employees - since they are the most able to move, the most likely to get a big pay rise if they move, and the least likely to want to stick around if they see negative organisational change. The software industry has very high staff turnover, at almost 20%, outweighing any nominal idea of removing less efficient staff.

If a company handles things well it may only lose a representative productivity range of staff, from best to worst. But a bulk redundancy process is likely to lead to the biggest loss in the top talent, get rid of slightly more of the bottom dwellers, and so result in maximising the mediocre!

In summary the answer to 'Why me?' in group redundancies is "because you were there" ... and you didn't have a personal friendship with the CEO 😉. Of course that is why new CEOs are often brought in to restructure - the first step of which is to take an axe to the current C-suite. 

Some of the best software engineers I have worked with have been made redundant at some point in their career. Group redundancies are not about you or how well you do your job. But taking it personally and challenging the messenger with 'why me?', as demonstrated by recent viral videos, is an understandable emotional response to rejection, and to the misguided belief that work aims to be some form of meritocracy, in the same way college might.

LIFO and FIFO

LIFO rather than FIFO is the norm in firing. New hires are less likely to have established themselves as essential to the company, and have fewer personal connections within it. More importantly, many countries' redundancy legislation doesn't kick in until over 2 years of employment, and the longer you have been employed, the more the company will have to pay to terminate you.

Which means a new hire who has uprooted for their new tech job will be the most likely to find themselves losing that job when bulk redundancies hit.
But FIFO has its place too: next would be older engineers. Some companies don't even hire hands-on engineers much over the age of 40 anyway. But staff near retirement have at most only a few years left to contribute and may cost more for the same grade. So encouraging early retirement can be part of the bulk redundancy process.

Prejudicial Firing

Whilst redundancy is all about costs and not about your personal performance, that is not to say companies who pass the redundancy choices down to junior managers may not end up firing disproportionate numbers of workers who are not from the same background as their manager, i.e. white USA males, ideally younger than the manager. But prejudice is not personal either. That is pretty much what defines it as prejudice: a pre-judgement of people based on physical characteristics rather than their ability at the job. People are also least likely to fire staff that they have the most in common with, resulting in prejudicial firing.
Unfortunately it seems many companies with a good diversity policy for hiring may not have adequate ones for firing - again resulting in losing more of the higher performing staff.

I have heard of a case where someone got a new manager, who on joining was told to cut from his team, so he fired everyone outside the USA. The worker was so keen to stay at their current employer that they went over the head of their manager to senior management and asked for their redundancy to be repealed. Since they had been at the company many years and personally knew senior management, this worked.
Alternatively a more purely cost-based restructure may hire all developers from cheaper countries and fire most of those in the US, as happened with Google's Python team recently.

Fight for your job?

The company may set up a pool process for bulk redundancies, if numbers are high enough per country, where you can fight for a place in the lifeboat of remaining positions.

In both cases I would recommend that you don't waste time on a company that doesn't value you. If you do stay, you risk dealing with the bulk redundancy aftermath, which will be present unless the redundancies were for a pivot (3) or a one-off event (4):
an increased workload, pay freezes, no bonus, needing to overwork to justify being kept on, plus a negative work atmosphere.

In a case where I stayed after the department I was in was axed, I had to reapply for a new job which had moved to a different division. The work was less worthwhile and, at the time, the employment of in-house software developers as a whole was questioned as being unnecessary for the organisation. I outstayed my welcome for 18 months of legacy commercial software support, before getting the message and quitting.

Lesson learnt: if you must ask to stay in your company, via senior management, a pool or reapplication, make sure you look around and apply for other jobs outside of it at the same time.

By staying, you also miss out on a minimum of a couple of months' tax-free pay as a settlement.

On the other hand, if the redundancy round is for a more minor pivot, and you are happy in the role, it may be well worth staying around to see how things pan out.

Of course you may get no choice in the matter, in which case get straight into GET HIRED mode and start the job search. If you can manage it fast enough, you will benefit financially from the whole process. Although if the reason is (5), a sector-wide reduction, then it will take longer and be harder to obtain the usual 20% pay increase that a new position can offer.


 H I R E D 

Why change jobs (aside from being fired!)

  • It is a lot easier to get a pay rise or promotion by changing companies than by being promoted internally - whether to fast-track your career to a principal or architect top IC role, or just to get paid more.
  • Changing jobs gives you much wider experience, of different technology, approaches and cultures. Making you a better engineer.
  • If you have been in your current job over 10 years without significant internal promotions or changes of role, then it is detrimental to your CV, indicating you are stuck in a rut and unable to handle change, e.g. new technology.
  • You want to shift sectors.
    I changed from public sector web developer, to commercial cloud engineer with one move.
  • You want to get into new technology that is not used in your current role.
    I changed from a Python, Ruby config management automation engineer to a Kubernetes Golang engineer with another.
  • You want to change your role in tech, or leave it entirely. For example get out of sales as a solution architect and back into a more technical role as an SRE.
On that basis many software engineers change jobs every 2 or 3 years for part of their careers. It's expected; the average engineer in a FAANG stays less than 3 years.

Of course you probably need to be in a job for at least 2 years to fully master it.
If your CV has loads of similar positions where you barely make it past the probation period, it marks you out as a failure at those roles == failing hiring at the first step, the HR CV check.

Upskilling

The other problem is that changing jobs to change roles, even if it's just to use a new language or framework, can be blocked by roles requiring experience in that area on the CV to get interviewed for the job in the first place. For software engineering that is less of an issue, since tech changes faster than any other sector.
Early in a technology boom, you just need to prove you have a range of experience and software languages, and are willing to learn. To catch the cloud engineer bus, I got a job in it in 2016. The US cloud sector was $8 billion back then; it is $600 billion now. Similarly to get on board with Golang and Kubernetes in 2019. In the first few years of a tech boom most companies will initially have to cross-train engineers without direct experience. The corollary is that in the current downturn, attempting to pivot to an established technology, which k8s has become, is going to be much harder.

Market rates

Clearly ML ops and AI data science are the current booming areas. The demand so far outstrips current supply that switching to a more junior Python AI role may pay as well as a senior Django web developer role, for example.

So around £60k for a junior role, but in 3-4 years it should jump to at least £100k for a senior AI engineer. Of course for US salaries add 30%, plus usually free medical, life insurance etc. The lower tax rates cancel out the higher cost of living in the US ... so it's UK salary +30% in real terms*. Researching the going rate for the particular role, technical skills and sector you are applying for is a necessary part of the hiring process, in order that you don't let recruitment bargain you down too low.

* Note that geographic software pay differences are why you often come across engineers of other nationalities emigrating to, and working in, the higher paying countries: USA, Canada and Australia. I have worked with many people from the UK and Europe who live in the USA, and Indians who live there or in Europe, for example.
Of course, as a cheap foreign worker myself, I too stick with US companies, partly because they pay rather more than UK ones, even if a lot lower than what I would get if I moved there 😉

Now is the time when such a switch will be easier to accomplish without having to work nights doing courses, certifications and personal projects - the usual means of demonstrating your ability without any work experience.

The caveat here is that moving jobs in a downturn, as we are arguably experiencing currently, can depress market salary rates; if you are already at the top of those when made redundant, it can mean you have to take a pay cut for a year or two rather than face the cost of long-term unemployment.

The hiring process for an experienced software engineer role
Interview to Offer should take a month.

If not, then the recruitment is likely for a group of roles in an expansion process, and from the screening and CV stages you are not one of the top candidates. You may be waiting on the backlog of potential interviewees for a couple of extra months before it properly kicks off.
Or you may be told the post is no longer available, sorry!
Even if you would eventually get a post, the wait may stretch your redundancy settlement. Therefore I would not bother pursuing any application process that looks to be stretching on past 6 weeks.
The start date will be 5 weeks from contract (partly to cater for notice, referee and compliance checks etc.)

That makes it 2 months minimum from applying for a role to starting.


The process will consist of a technical assessment task and at least 3 interviews: screening, manager and technical.
There may be another for introduction to team mates / office etc., which is unlikely to have any effect on the hiring decision, unless you and your potential new manager take an instant personal dislike to each other.

The HR screening interview just checks you are a genuine candidate for the job.
The manager interview similarly is more about checking you will fit in with the company and team, plus that you have basic personal communication skills.

The Technical Interview is what matters

Passing the technical interview is what really decides whether you will get a job offer. Sometimes the tech interview may be split into two: one more task and questionnaire based, and the other more discussion. Often the initial task part will be given as WFH.

The technical interview will consist of technical questions to explore whether you have the knowledge and experience required, plus something to confirm you can write code and discuss that code, for a developer or SRE role. For the former it would likely be application code, whilst for the latter automation code.
For a more purely system administration / IT support role it will involve specifying your processes for resolving issues.

If you are unlucky and it is an in-person interview, you may have to whiteboard pseudo code live, in response to a changing task described to you on the spot. Although I have only had that once. More common, especially for hybrid / remote roles, is the takeaway task, to be completed in a 'few hours' at most.

It is possible that either of the above could be replaced by another source for your code: talking through one of your open source packages, if you have any, or talking through one or two longer automated coding exercise assessed tasks. I have never come across either of these though.

The main point is that the core of any technical interview for a developer-related role will involve talking through code you have written - its faults and features - as a kicking-off point to check your understanding of the code, how it could be improved, and how you would tackle scaling or a new exemplar functional requirement.

You will be asked to talk through past code or technical work in a more generic manner, in response to standard questions asking for examples of your past work that show how you fit the job.

Preparing for the Technical Interview

It doesn't take much to work out that a 20% pay rise is worth a day's worth of work a week.
Assuming you stay in your new job for 2 or 3 years, that is equivalent to 6 months' pay.
On that basis, even doing a week of work to apply to, and prepare for, a single job is still very well worth it, if you get the job.

Adopting a scatter-gun approach, i.e. applying with a generic CV and covering letter to 10 or more jobs, is a waste of time in my view. If you need a new job, then it should be one you are genuinely interested in and research. That means you should probably have a maximum of 3 tailored applications on the go at once. Even when I was made redundant (and about to get married) I think I limited myself to 4 job applications in total, with one primary one that thankfully I did end up getting.

There are many sites that can advise how best to do that, based on the framework I have outlined above for how your hiring will be decided. I think preparing some Challenge Action Result stories targeted at the details of the new employer is useful, plus spending a day or so refining that '2 hours' development task, researching the company, and preparing specific questions and perhaps suggestions for your interviewers.

Being a Technical Interviewer

From the other side of the table, clearly candidates need to show sufficient competency for the post. They may show it, but only within a totally different technical stack. Smaller companies tend to have less capacity and time to get people up to speed with new tech, so they will likely fail these candidates, even though they are capable of doing the job eventually.
The technical interviewers will tend to pair on the assessment, to improve its consistency. Swapping partners for interviews regularly also helps.

The assessment process is likely to use an online system such as Jobvite or Greenhouse, where each interviewer assesses the candidate, finally summarising it all with a recommendation of strong pass, pass or fail. Sometimes this is for a specific post and grade; otherwise the assessment can include a grade recommendation. The manager then rubber stamps that, assuming appropriate funding is available. HR's job is to beat the candidate down to the lowest reasonable price, without going so low that the candidate walks away.

A healthy growing company will tend to have a rolling recruitment process, as they expect to be increasing head count in proportion to customers and revenue. On that basis they will likely be aiming to recruit anyone with a strong pass, plus maybe most of the passes too.

Given that engineering jobs are highly specialised and require relevant experience, I have not seen cases of way more interviewees than jobs. Currently, even with all the redundancies, there is still an under-supply of engineers.
Also, the approach for HR will be to set experience and skills prerequisites for the roles that keep the number of candidates who make it to technical interview down to around double the number of vacancies, since each interview takes out a day of work for every interviewing engineer: to prep, interview and assess.

HIRING SUMMARY

You must pass each of the first 5 or 6 steps to get to the next one and get the job.

  1. HR check written application is a plausible candidate
  2. FAANG sized company - automated quiz / Leetcode style challenge - to reduce the numbers - because they get way more speculative applicants.
  3. Recruiter chat to check candidate is genuine and available
  4. CV skills / experience check vs other applicants to shortlist those worth interviewing
  5. Technical task, could be takeaway or whiteboard / questionnaire interview
  6. Technical interview, in person or Zoom with engineers

  7. Manager interview / introduction to team mates. 
  8. Recruiter chat. Negotiate exact salary. Agree start date.
  9. Contract is signed, YOU ARE HIRED.