Recently I used ChatGPT to edit an email, and it opened an in-place editor: I could highlight a small section, a little box would open, I could tell it what I thought was wrong, and it would edit just that section. But I could also edit the text myself directly. This is way better than having it rewrite my whole text, then figuring out where that section went and copy-pasting it back into my actual document. It felt a lot more like editing with a co-author, not in the "it's like a person" way, but in the "it's a focused edit" way. Idk, it's just a better writing experience.
Having played with LibreOffice extensions a bit before, I'm fairly certain at least a primitive version of this could be made, but I was hoping someone might have experience with the existing extensions. Most of them look like "write a paragraph for me" to my eye, but none have great descriptions either.
Thoughts?
Edit: Alternatively, does anyone have thoughts on the model-side requirements for making this? It's fairly trivial to feed the current text into the LLM and mark the highlighted section. I suspect I could figure out how to open a window of some sort for extra instructions (actually, using comments would make this pretty easy in LibreOffice), but I'm not sure I know how to get the LLM to give me reliably parsable output. I could probably make a track-changes edit, or at worst a comment from the LLM; I just don't know if telling it to respond with only the edit would work. It's been a while since I've played with all this.
Edit 2: Frustratingly, the OpenAI interface has changed since I made this post, and it's currently trash: it rewrites for you rather than making suggestions. Annoying.
Edit: Alternatively, does anyone have thoughts on the requirements on the model side of things to make this?
If you're talking to a model accessible via HTTP, you can interact with it via a chat API. You track the messages sent back and forth yourself and post the whole conversation to the endpoint each turn so the model has context. I've been experimenting with this using ollama, but I'm pretty sure you can do it with llama-server too.
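A minimal sketch of that turn-tracking idea in Python, assuming ollama's default `/api/chat` endpoint on port 11434 (the model name here is just a placeholder; use whatever you've pulled locally):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # ollama's default chat endpoint

def build_payload(history, model="qwen2.5"):
    # The whole conversation goes up each turn so the model has context.
    return {"model": model, "messages": history, "stream": False}

def chat(history, user_text, model="qwen2.5"):
    history.append({"role": "user", "content": user_text})
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(history, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]
    history.append(reply)  # keep the assistant turn for the next round
    return reply["content"]
```

You'd keep one `history` list per editing session and call `chat()` each turn.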
I’m not sure if I know how to get the LLM to give me reliably parsable output…
Prompt engineering + sanity checks in the client-side code + retries. There are also options for requesting structured output specifically; they might or might not work. I've had decent success getting qwen models to output JSON with prompts that include things like "Output strictly valid JSON. No extra text." or similar. Sometimes the model quotes the result with triple backticks (or triple backticks plus "json") at the top of the reply, which is easily fixed by string manipulation on the client. Occasionally, especially with complicated prompts, it goes off the rails and gives me junk that I've either had to fix by hand or automatically retry later. It works well enough that I've been able to do image analysis in batches with it successfully.
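A sketch of the strip-then-retry loop I'm describing (the fence stripping and retry count are just what's worked for me, nothing official):

```python
import json

def strip_fences(text):
    # Models sometimes wrap JSON in ``` or ```json fences; strip them off.
    text = text.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1] if "\n" in text else ""
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return text.strip()

def ask_for_json(ask, retries=3):
    # `ask` is any callable that sends a prompt and returns the model's reply.
    prompt = "Output strictly valid JSON. No extra text."
    for _ in range(retries):
        reply = strip_fences(ask(prompt))
        try:
            return json.loads(reply)
        except json.JSONDecodeError:
            continue  # junk reply: just retry
    raise ValueError("model never produced valid JSON")
```

The sanity check here is just `json.loads`; for an editing extension you'd probably also verify that the reply actually contains the expected keys before applying it.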
I don’t know anything about the specifics of LibreOffice extensions though – never worked with those personally before.
Edit: If you don't want to go to all the effort of writing a whole extension for it, you can also just paste the context into the model running in the terminal. Put the whole document early in the session (quoted with backticks), then ask for refinement by quoting snippets along with the revision comments you want applied. I haven't done this for English-language editing, but I have tried it for sanity-checking my own hand-written Python code. Sometimes it gives good suggestions (e.g. it caught a few typos I hadn't noticed when I tried this yesterday) and sometimes it doesn't; your own judgement is, of course, very important.
Edit 2: When I'm providing lots of text, I've found it sometimes useful to include a description of what info I'm providing, then an instruction like "Say 'OK' to proceed.", followed by something like "The whole file is: …". That way I can load context into the session without the model wasting a lot of time trying to deduce what I want before I'm ready, in a strictly user/assistant/user/assistant/… chat conversation mode.
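In terms of the chat-API message list from earlier, that priming turn looks something like this (the exact wording is just my habit, not a requirement):

```python
# Prime the session with context before asking for edits. The "Say 'OK'"
# instruction stops the model from guessing at a task before you're ready.
document = "…full document text…"
history = [
    {"role": "user", "content":
        "I'm going to give you a document to edit. Say 'OK' to proceed.\n\n"
        "The whole file is:\n" + document},
    {"role": "assistant", "content": "OK"},
    # Later user turns quote a snippet plus the revision instructions.
]
```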
No, I'm not really interested in online LLMs. Or do you mean local HTTP? I guess I could try yoinking the existing stuff off ChatGPT and hooking it up locally; that's worth considering, since their interface is pretty good. At the very least I should inspect it if I make a LibreOffice version. I don't want to be going back and forth forever; I want the whole process easy and integrated.
I was actually asking more about model type. Like, some models take chat-like prompts where they follow directions, and some do more fill-in-the-blank or continue-from-where-I-left-off (I actually suspect the latter matters here). I'm not really in the field, so I don't know the proper names, but the distinction was something important to pay attention to early on when LLMs came out, so I thought maybe it was still something to keep an eye on.
As for prompt engineering, I suppose it'll depend on how well they can follow instructions, but I suspect the newer models are more capable than the ones I was playing with when they first came out. I'll definitely keep the JSON trick in mind. I did a little NLP when LLMs first came out, but it wasn't reliable enough at the time to be worth the work; easier to tell the model the format and strip out the junk. Maybe with modern models I can just give them the context, the substring, the edit instructions, and the output instructions, and that will be enough. Hard to say until I try it.
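For what it's worth, the "context + substring + edit instructions + output instructions" prompt could be assembled like this (a hypothetical shape, not a tested recipe; the JSON key name is made up):

```python
def edit_prompt(document, snippet, instruction):
    # Bundle context, the highlighted passage, the user's instruction,
    # and a strict output format into one prompt.
    return (
        "You are editing a document. Here is the full text for context:\n"
        f"{document}\n\n"
        f"Edit only this exact passage:\n{snippet}\n\n"
        f"Instruction: {instruction}\n"
        'Respond with strictly valid JSON: {"replacement": "<edited passage>"}. '
        "No extra text."
    )
```

The extension would then swap the highlighted range for the `replacement` value (after the kind of parse-and-retry sanity checks discussed above).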
Or you mean local HTTP?
Yes, I mean local HTTP. ollama listens on port 11434 and responds to HTTP requests by default. I’m not sure what llama-server uses by default, but like I said, I’m pretty sure you can do the same (or at least something very similar) with it.
I was actually asking more model type.
OK, I see what you mean. I’m still too new to LLMs myself to have a good answer then on that beyond saying that I know it works with qwen3.6 and gemma4 from having actually experimented with those specifically.
I did a little nlp
I mean, something like:
`result = result.replace("```json", "").replace("```", "")`
is good enough in practice for the kinds of things I've been doing. (I'm dealing with cases where triple backticks should never appear in the output, though; you might have to get more creative if you want a result that has that kind of quoting embedded in something else…)