Crafting a Language Server in Golang: A VSCode LSP for Lama2
Dive into our journey at Hexmos, where we bridged Lama2 with VSCode using an LSP server written in Golang. From challenges to insights, we're excited to share it all!
This post narrates our journey to developing advanced functions in our VSCode extension for Lama2, a plain-text API manager built at Hexmos. We will see how our thinking and implementation evolved through three stages over a period of a year.
The Genesis of Lama2
Lama2 is a plain-text oriented API manager optimized for team collaboration via Git.
At Hexmos, we store all our APIs in a Git repository, ensuring easy collaboration and regular updates. Whenever an API is created or updated, the link to the l2 file's Git repo is mentioned as part of the corresponding PR/MR.
The demonstration below shows how an API request can be made from an `.l2` file.
The next demonstration shows how the same request can be made from Lama2Code, our VSCode extension written in TypeScript.
Running an l2 file is simple:
```shell
l2 /home/lovestaco/apihub/oneLogin/login.l2
```
The diagram depicts the flow of an API call: the L2 binary gets the envs from `l2.env`, then uses httpie-go to make the request and outputs the results to stdout.
The `l2.env` file sits in the same folder as the l2 API file. It contains environment variables (envs) such as:

```shell
export FABI_PROD="http://httpbin.org/"
export FABI_LOCAL="http://0.0.0.0:8000/"
```
Early Steps: Naive Implementation of Auto-Completion
Elevating User Experience: Auto-Completion for api.l2
Manually referring to env variables from `l2.env` every time wasn't efficient. So, to make things easier, we added a feature: the extension reads `l2.env` to offer auto-completion.
Retrieving envs became a repetitive task, necessitating an efficient solution.
The diagram illustrates the flow of an API request. The extension runs the l2 command, after which the L2 binary fetches the envs from `l2.env`, executes the request, and outputs to stdout. The extension then reads the output and displays it in a webview.
This picture showcases the auto-completion of an env using Lama2Code for the first iteration.
The Stage 1 Hurdle
While the initial problem was addressed, the solution wasn't optimal: it was neither reusable nor standardized. Clearly, a more efficient approach was required.
As a remedy, we introduced an argument: `l2 -e`. It gathered all the envs, allowing editors to invoke the command and use VSCode's registerCompletionItemProvider API to auto-complete envs.
Leveling Up: The CLI Switch
We developed the `l2 -e` command, which can fetch envs from both `l2.env` and `l2config.env`. When an env is declared in both files, the value in `l2.env` takes priority. This decision not only avoided conflicts but also improved the efficiency of our VSCode extension.
Why We Wanted a Project-Level env File (l2config.env):
Managing many `l2.env` files was getting tricky. So we thought, "Why not have one main file for the whole project?" And that's how `l2config.env` was born, right at the project's root.
Why Not Just Use the Old Approach?
Having our extension scan for `l2config.env` the same way it did for `l2.env` seemed redundant. We aspired to a more intelligent solution, prompting the new approach.
The above diagram shows how the Go binary fetches the envs from both the `l2.env` and `l2config.env` files.
```json
{
  "FABI_PROD": {
    "src": "l2configenv",
    "val": "https://karmaadmin.apps.hexmos.com"
  },
  "FABI_LOCAL": {
    "src": "l2env",
    "val": "http://127.0.0.1:8000"
  }
}
```
The envs are written to stdout in the above format.
Ensuring Relevancy in Envs Search
Initially, when users engaged with our extension, they were presented with a comprehensive list of all envs. This became overwhelming and counterproductive, especially as the number of envs grew.
To tackle this issue, we fine-tuned our approach: instead of returning everything, the binary fetches only the most relevant envs based on a searchQuery sent from the extension via the `l2 -e='searchQuery'` command, enhancing the user experience by presenting only the most pertinent results.
The above picture shows the refined envs based on the searchQuery, along with documentation for each.
Realization: A Need for Efficiency
Despite our efforts, we hit a realization: the approach was still inefficient. We had moved the implementation to Go, but every suggestion call loaded the binary anew, making the process sluggish and less responsive.
At this point, we started contemplating the potential of the Language Server Protocol (LSP) to address this inefficiency.
Finally: Embracing the Power of LSP
The below diagram shows how the extension spawns a Lama2 language server process and makes initialize and suggest/environmentVariables requests.
The "Aha!" Moment: Why LSP Became Our North Star
As our Lama2 GoBinary expanded with more arguments, we felt the need for a more efficient communication method. Initially, we considered socket-based communication for requests. However, upon discovering the Language Server Protocol (LSP) and its standard JSON RPC, we realized it could streamline the integration process for different editors, ensuring reusability and efficiency.
Behind the Scenes: Talking to LSP
- Socket-Based Communication: At first, we leaned towards socket-based communication, where data is exchanged between processes over a network. Though it offers direct communication pathways, it comes with complexities: managing connections, handling dropped data packets, and ensuring synchronous data flow can be challenging.
- Process Standard Streams (stdin/stdout): We found that gopls, a leading language server for Go, uses standard streams to talk to its clients. This method is simpler to implement and manage because it doesn't need connection management. Plus, it works on all operating systems, making our solution more flexible.
The picture below showcases the implementation of auto-completion using a custom method from LSP, tailored for Lama2.
Communication between VSCode(Lama2Code) Extension and LSP of Go Binary(Lama2)
The following diagram visualizes the sequential communication between Lama2Code and Lama2 LSP Server, detailing the initialization, operation, and termination processes of the Language Server Protocol (LSP).
Starting the L2 LSP Server:
- Action: The Client spawns the L2 LSP Server and the process starts.
- Reaction: Upon receiving this request, the server invokes the StartLspServer() function.
- Key Observation: At this point, the server waits for JSON-RPC requests, mainly via standard input (stdin).
Initialization:
- Action: After the server's startup, the Client sends an initialize request.
- Reaction: In response, the L2 LSP Server invokes its `Initialize()` function.
- Key Observation: The server takes this opportunity to set up its capabilities, which are then reported back to the Client.
Requesting Environment Variables Suggestion:
- Action: The Client seeks env suggestions and sends a `suggest/environmentVariables` request.
- Reaction: To serve this request, the server runs its `SuggestEnvironmentVariables()` function.
- Key Observation: The server offers suggestions for both project and local envs, based on the provided .l2 URI coupled with a pertinent search string.
Shutting Down the Server:
- Action: To prepare for an orderly shutdown, the Client dispatches a `shutdown` request.
- Reaction: In response, the L2 LSP Server invokes its `Shutdown()` function.
- Key Observation: The server gears up for a smooth and orderly shutdown, ensuring that all tasks are adequately wrapped up.
Exiting the Server:
- Action: Lastly, the Client sends an `exit` notification.
- Reaction: The L2 LSP Server responds by invoking its `Exit()` function.
- Key Observation: The server's process comes to a halt. If a shutdown request was honored beforehand, the server makes a graceful exit; otherwise, an abnormal termination occurs, resulting in a non-zero exit code.
How we created a language server (LSP) in Golang
Creating a language server using the Language Server Protocol (LSP) in Golang is a step-by-step process.
Step 1: Argument Parsing
Handle the incoming arguments to determine whether the LSP server should start.
```go
// ArgParsing decides whether the LSP server should start,
// based on the parsed command-line options.
func ArgParsing(o *Opts, version string) {
	if o.Lsp {
		// Incoming requests will be processed by lsp.StartLspServer()
		lsp.StartLspServer()
	}
}
```
Step 2: Read Input
The next step involves reading input from standard input (`stdin`) so the LSP server can receive request data:
```go
import (
	"bufio"
	"os"
)

// StartLspServer reads requests line by line from stdin and
// writes responses to stdout.
func StartLspServer() {
	scanner := bufio.NewScanner(os.Stdin)
	writer := bufio.NewWriter(os.Stdout)
	for scanner.Scan() {
		handleInput(scanner.Text(), writer)
	}
}
```
Step 3: Handle the Input
After reading the input, we decode and process it, then send the response back.
```go
// handleInput decodes a JSON-RPC request, dispatches it, and
// writes the JSON-encoded response back to the client. Errors
// are logged to stderr so stdout stays reserved for JSON-RPC.
func handleInput(input string, writer *bufio.Writer) {
	var rpcRequest request.JSONRPCRequest
	if err := json.Unmarshal([]byte(input), &rpcRequest); err != nil {
		fmt.Fprintf(os.Stderr, "Error decoding JSON-RPC request: %v\n", err)
		return
	}
	rpcResponse := HandleMethod(rpcRequest)
	if responseData, err := json.Marshal(rpcResponse); err != nil {
		fmt.Fprintf(os.Stderr, "Error encoding JSON-RPC response: %v\n", err)
	} else {
		writer.WriteString(string(responseData) + "\n")
		writer.Flush()
	}
}
```
Step 4: Handling Methods
This function handles the specific method requested by the LSP client:
```go
// HandleMethod dispatches the request to the matching LSP method.
// Log lines go to stderr to keep stdout clean for responses.
func HandleMethod(req request.JSONRPCRequest) response.JSONRPCResponse {
	switch req.Method {
	case "initialize":
		fmt.Fprintln(os.Stderr, "Initialize method called.")
		return methods.Initialize(req)
	default:
		fmt.Fprintln(os.Stderr, "Default response triggered.")
		return response.DefaultResp(req)
	}
}
```
Step 5: Run and Test
Finally, once everything was set up, we ran the LSP server binary and tested it:
Command to run the server:

```shell
mylsp --lsp
```
To send an initialize request, pass JSON like the following (collapsed to a single line, since the server reads input line by line):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "processId": null,
    "clientInfo": { "name": "MyEditor", "version": "1.0.0" },
    "rootUri": "file:///path/to/workspace"
  }
}
```
Reflecting on the Journey
Software development is vast, and having the right tools matters a lot. Good tools make our work easier and faster. When we worked on Lama2, we faced many issues. But we learned from them and found better ways to do things.
We made a language server in Golang using the Language Server Protocol (LSP). This helped with:
- Reusability: With Lama2 and the LSP in Golang, we created solutions that can be reused across platforms. Extensions can be built for multiple editors without reinventing the wheel.
- Efficiency: Designing our own tools allowed for leaner, more direct functionality, eliminating redundancies.
The best part? Lama2 is open-source. This means anyone can use it, tweak it, and share it. With the help of Git and Visual Studio Code, Lama2 is free and works great. Our journey with Lama2 serves as a reminder: when you build with the community in mind, the path often leads to innovation.
Join Our Quest!
Lama2 takes inspiration from Markdown. Think of our approach as Markdown for APIs. We warmly welcome contributors to be a part of our journey. Dive into the Lama2 GitHub repo, share your insights, or even roll up your sleeves and code with us. Learn more about the Lama2 philosophy here.