Why do Java-based serverless functions have cold starts if the JVM uses a JIT compiler?

Late Friday night thoughts after reading material on how Cloudflare’s V8-based “no cold start” Workers function (in short, thanks to the V8 engine’s just-in-time compilation of JavaScript): I’m wondering why this no-cold-start style of serverless function seems to exist only for JavaScript.

Is this just because, architecturally, when AWS Lambda / Azure Functions were launched, they were designed as a kind of even more simplified Kubernetes model, where each function runs in its own container? I assume that was a simpler way of keeping different clients’ code separate than whatever magic sauce V8 isolates provide under the hood.

So given that Java is compiled into bytecode for the JVM, which interprets it and JIT-compiles frequently used methods to machine code, is it technically possible to have no-cold-start Java serverless functions too? As long as there is some way to load each client’s bytecode on the cloud provider’s server as it is invoked.

What are the practical challenges for this to become a reality? I’m not a big expert on all this, but I can imagine a few:

  1. The compiled bytecode isn’t designed to be loaded in this way – it expects to be the only code executing in a JVM
  2. JVM optimisations aren’t written to support loading multiple short-lived functions, and treat all loaded code as one massive program
  3. Once started, a JVM doesn’t support loading additional bytecode.
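For what it’s worth, point 3 isn’t quite a hard limit: the standard `ClassLoader` API has always allowed classes to be loaded by name into a running JVM. A minimal sketch of the idea, using a built-in class (`java.util.ArrayList`) as a stand-in for a tenant’s uploaded bytecode:

```java
import java.lang.reflect.Method;

public class DynamicLoad {
    public static void main(String[] args) throws Exception {
        // Classes are loaded lazily, by name, at runtime. A serverless host
        // could do the same with each client's bytecode, giving each its
        // own ClassLoader. Here a JDK class stands in for tenant code.
        ClassLoader loader = DynamicLoad.class.getClassLoader();
        Class<?> loaded = loader.loadClass("java.util.ArrayList");

        // Instantiate and invoke the "tenant" code reflectively,
        // without any compile-time reference to it.
        Object list = loaded.getDeclaredConstructor().newInstance();
        Method add = loaded.getMethod("add", Object.class);
        add.invoke(list, "hello");
        System.out.println(list); // prints [hello]
    }
}
```

A real multi-tenant runtime would give each function its own `ClassLoader` instance so that classes (and their static state) from different clients stay separate; the harder problems are the optimisation and isolation issues in points 1 and 2, not loading itself.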

In principle, you could probably develop a Java-centric serverless runtime in which individual functions are dynamically loaded on demand, and you might be able to achieve pretty good cold-start times this way. However, there are two big reasons why this might not work as well as it does for JavaScript:

  1. While Java is designed for JIT compiling, it has not been optimized for startup time nearly as intensely as V8 has. Today, the JVM is most commonly used in large always-on servers, where startup speed is not that important. V8, on the other hand, has always focused on a browser environment where code is downloaded and executed while a user is waiting, so minimizing startup latency is critical. (It might actually be interesting to look at an alternative Java runtime like Android’s Dalvik, which has had much more reason to prioritize startup speed. Maybe it could be the basis of a really fast Java serverless environment!)
  2. Security. V8 and other JavaScript runtimes have been designed with hostile code in mind from the beginning, and have had a huge amount of security research done on them. Java tried to target this too, in the very early days, with “applets”, but that usage of Java never caught on. These days, secure sandboxing is not a major concern of Java. Because of this, it is probably too risky to run multiple Java apps that don’t trust each other within the same container. And so, you are back to starting a separate container for each application.
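The startup-latency gap in point 1 is easy to observe for yourself: the JDK exposes the JVM’s own launch timestamp via `RuntimeMXBean`, so a tiny program can report how long the runtime took just to reach user code. A rough sketch (the number varies with JVM version, flags, and hardware, so no figure is claimed here):

```java
import java.lang.management.ManagementFactory;

public class StartupProbe {
    public static void main(String[] args) {
        // getStartTime() is the wall-clock moment the JVM was launched;
        // the difference to "now" approximates the runtime's own
        // contribution to a cold start, before any user code runs.
        long jvmStart = ManagementFactory.getRuntimeMXBean().getStartTime();
        long elapsed = System.currentTimeMillis() - jvmStart;
        System.out.println("JVM launch to main(): " + elapsed + " ms");
    }
}
```

Running this on a stock JVM and comparing against the single-digit-millisecond isolate spin-up Cloudflare describes for V8 makes the gap concrete, even before any framework or classpath scanning adds to it.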