• 2 Posts
  • 28 Comments
Joined 2 months ago
Cake day: February 21st, 2026

  • If you want to make new technology, it takes decades of development at a minimum. That means decades of funding for minuscule improvements that the vast majority of people don’t understand.

    How do you think they should get that funding? More importantly, what systems actually exist to get that funding? To the best of my knowledge there are only three real sources of funding for science: Government Grants, Industry (start-up or established), and Research Endowments. Government grants have the benefit of relying on people who sorta understand the technology, but politicians don’t like spending money and the Trump admin has become capricious with funding decisions. Industry requires you to advertise to industry people, and yeah, that does come with the risk of grifters. Research Endowments are my personal favorite, but they don’t really exist yet, take decades to build up, and struggle with oversight. All three methods come with administrative parasites taking large chunks of money to pay their own cushy salaries.

    I sympathize with your concern, but these are the systems we have - at least for the time being.



  • No, I’m not really interested in online LLMs. Or do you mean local HTTP? I mean, I guess I could try yoinking the existing stuff off chatGPT and just hooking it up locally. That’s worth considering, since their interface is pretty good. At the very least I should inspect it if I make a LibreOffice version. I don’t really want to be going back and forth forever. I want my whole process easy and integrated.

    I was actually asking more about model type. Like, some models take chat-style prompts where they follow directions, and some do more fill-in-the-blank or continue-from-where-I-left-off (I actually suspect this is the case here). I’m not really in the field, so I don’t know the proper names, but the distinction was something important to pay attention to early on when the LLMs came out, so I thought maybe it was still something to keep an eye on.

    As for prompt engineering, I suppose it’ll depend on how well they can follow instructions, but I suspect the newer models are more capable than the ones I was playing with when they first came out. I’ll definitely keep the JSON trick in mind. I did a little NLP when the LLMs first came out, but it wasn’t reliable enough at the time to be worth the work – easier to tell it the format and strip out the junk. Maybe with modern models I can just give them the context, the substring, the edit instructions, and the output instructions, and that will be enough. Hard to say until I try it.
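    The "tell it the format and strip out the junk" approach can be sketched in a few lines — this is my own toy helper, not tied to any particular model or library, assuming the model was told to reply with a JSON object but may wrap it in chatter or markdown fences:

```python
import json

def extract_json(reply: str):
    """Pull the first JSON object out of a chatty model reply.

    Naive brace counting: ignores braces inside strings, which is
    usually fine for short, simple output formats.
    """
    start = reply.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(reply[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # object closed; try to parse it
                try:
                    return json.loads(reply[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None

reply = 'Sure! Here is the edit:\n```json\n{"action": "replace", "text": "fixed"}\n```'
print(extract_json(reply))  # {'action': 'replace', 'text': 'fixed'}
```

    The nice part is that the stripping code doesn't care how much junk the model adds around the object, so it degrades gracefully as instruction-following varies between models.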

  • I think it’s very much a “how much data on this exists” sort of problem for most of these. Like, I can pick out bacteria from fungi on an agar plate trivially, but I don’t know if there are databases of agar plates characterizing different growths with different background colors and all the diversity that real life has. Honestly, I haven’t tried this yet. It might be able to get it just fine, or might be able to get it if I backlight the plate - of course, at that stage there are other programs for detecting colonies.

    The dream, for me, is to get it to understand the protein structure files and DNA sequence files then hook it up to some lab robotics and automate experiments that are mostly trivial but slightly dynamic. Maybe start with something simple like cloning then build out to other methods. Some of this stuff exists already but companies charge you a fortune and go out of business (or get bought up and discontinued) constantly, so it kinda needs to be stuff I can build and maintain myself – or FOSS.

    Even for purely computer stuff, anytime I try to get the AI to help with my proteins, it’s functionally useless because it doesn’t have a way to “see” the protein’s structure file. I can write my own scripts to help with that, but I’ll have to work on the connection between the language the AI thinks in and the actual things my code detects. Or maybe I can tell it to ask questions based on the writing, then run code that analyzes the protein to give answers to specific questions… Even then, much of what I’d want help with when looking at proteins is how to write analyses of points in 3D space, and while it has helped me pick the right algorithms (sometimes), I haven’t really been able to give it enough information to let it check that things are being implemented correctly (I think this is the alignment problem). That might take something like hooking it up to pymol (3D viewer), or it might just be a bit too dumb. It’s hard to say without trying it, and it’s a lot of work for something it’s likely to get confused about even with the ability to “look” at the protein structure.
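    For what it's worth, a lot of those "points in 3D space" analyses boil down to pairwise-distance questions the AI can answer in text form if a script does the geometry first. A minimal sketch with made-up coordinates (nothing from a real PDB file):

```python
import numpy as np

# Toy alpha-carbon coordinates in angstroms (invented for illustration).
coords = np.array([
    [0.0, 0.0, 0.0],
    [3.8, 0.0, 0.0],
    [3.8, 3.8, 0.0],
])

# Full pairwise distance matrix via broadcasting: shape (N, N).
diffs = coords[:, None, :] - coords[None, :, :]
dists = np.linalg.norm(diffs, axis=-1)

# "Contacts": pairs of points closer than a cutoff, excluding self-pairs.
cutoff = 5.0
i, j = np.where((dists < cutoff) & (dists > 0))
contacts = sorted({(a, b) for a, b in zip(i.tolist(), j.tolist()) if a < b})
print(contacts)  # [(0, 1), (1, 2)]
```

    The idea would be that output like the contact list (or distances, angles, etc.) becomes the text "description" the language model reasons over, instead of the raw structure file it can't see.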

    I feel like, for coding, one thing I’m going to have to get it to do is stop after it makes a function or something, so I can check that it’s still going where it’s supposed to go or tell it what the next function needs to do. I don’t know. Maybe I’ll start with lots of hand-holding, then slowly build it up until it can reliably do more or I can’t get it to be reliable enough. Maybe there’s a coding community on lemmy that’s a decent place to talk shop about how to build these scripts up and which local models are good at what?



  • Yeah, I was thinking about the code too. I think the looping output explanation makes a lot of sense and puts the “Agentic AI” into a healthier/more-realistic framework.

    I’m a lot more inclined to write my own loops than trust someone else’s AI, but with that framework I’m not sure how useful these “AI agents” will be for most non-text-based problems, since that’d require converting back and forth between text-based mediums and whatever medium the problem is in, which seems very problematic. For code I could try giving prompts to catch typos, make tests, and improve functions. Even this seems pretty limited, since usually the AI can’t see the larger picture, identify the problem, and plan a solution on its own. Or maybe it can in some contexts, but not the stuff I’m working on – maybe my work isn’t routine enough, Idk. I have been using it to find/learn algorithms and get numpy notations, but it just doesn’t grasp what math needs to be done when I try to explain my problems.

    I’ll have to think more on how to set up loops that are more generally useful and won’t require more work in making sure they’re doing what I want than the work they get done.
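    A "write your own loop" setup can start very small. This is a runnable sketch of the checkpointed-loop idea — `call_model` is a hypothetical stand-in for whatever local model you'd actually query (here it returns canned replies so the loop runs as-is), and the loop keeps only pieces that pass a check:

```python
import ast

def call_model(prompt: str) -> str:
    """Stand-in for a real local-model call (e.g. an HTTP request to a
    local server). Hardcoded replies so this sketch is self-contained."""
    canned = {
        "step 1": "def add(a, b):\n    return a + b",
        "step 2": "def sub(a, b):\n    return a - b",
    }
    return canned.get(prompt, "")

def parses(code: str) -> bool:
    """Cheapest possible check: does the draft at least parse as Python?"""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def checked_loop(steps, approve):
    """Generate one piece at a time; keep it only if `approve` says so.
    In a real workflow the approve step could be a human review,
    a test suite, or both."""
    accepted = []
    for step in steps:
        draft = call_model(step)
        if approve(draft):  # the checkpoint between generations
            accepted.append(draft)
        # else: re-prompt, edit by hand, or skip the step
    return accepted

pieces = checked_loop(["step 1", "step 2"], parses)
print(len(pieces))  # 2
```

    The point of the structure is that the loop, not the model, owns control flow — you can swap the approve function from "parses" to "passes my tests" to "I read it and said yes" without touching anything else.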


  • I apologize if the context/background comes off as an agenda. It’s not that I’m trying to convince people one way or another, but I am concerned the anti-AI sentiment may be causing people to dismiss useful tools. I was attempting to provide some of my thoughts including that concern as context to my more general question of what I might need to do to properly utilize the tools.

    If it helps, I agree that you shouldn’t spend any money on anything AI. To me, most “generative” AI is like a programming package. NumPy is genuinely a really big deal, and coding without it is foolish, but people who don’t code shouldn’t worry about it. It’s not yet clear to me where AI agents fall as a tool in the world, and I’m genuinely trying to work that out. It might be useful purely as a coding tool – at the very least I think I want to try it as a coding tool. I’m also a biologist, so I’m very keen to use robots to automate routine tasks – not sure if the AI will be a tool to build that automation or be a part of it.




  • Also working on some 3d maths.

    I’ve used the free versions a bit, but not really to the extent that I’d call it vibe coding. The chat bots often know where to find libraries or pre-existing functions that I don’t know. It’s also okay at algorithms for well-defined problems, but it often says to be careful not to do something I absolutely need to do, or vice versa. It’s very hit and miss on debugging. It’ll point out obvious stuff (typos) reliably, and it can usually do some iteration stuff, but it usually doesn’t pick up on other things. Once in a rare while it will impress me by suggesting I look at a particular thing, and I think it manages this better in new chats, but most complex issues fail for it. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you’re doing, and test that individual steps are doing what they need to do. The bots can’t really do any sort of planning or breaking down a problem into sub-problems, and they really suck at thinking about 3D stuff.
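    Testing individual steps is especially cheap for 3D code, since most operations have properties you can assert directly. A toy example of my own (not from the thread): checking that a rotation matrix actually does what it claims before trusting anything built on it.

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rot_z(np.pi / 2)
v = np.array([1.0, 0.0, 0.0])
rotated = R @ v

# Step checks: 90 degrees about z sends x-hat to y-hat, and rotations
# preserve lengths. If either fails, the bug is in this step, not later.
assert np.allclose(rotated, [0.0, 1.0, 0.0])
assert np.isclose(np.linalg.norm(rotated), np.linalg.norm(v))
```

    Baking checks like these in after every step the bot writes is exactly the kind of verification that catches its 3D mistakes before they compound.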