# random
c
Anyone trying out the jetbrains AI co-pilot?
s
Not yet. The fact that it costs more than WebStorm itself is a bit of an initial turn-off.
c
Yeah, that's a bit weird, but I haven't had a chance to dive in either.
n
I have tried it for writing commit messages. So far they were spot on.
n
I haven't looked at that, but I use TabNine, which I find to be excellent. I also have the "Codeium" and "Cody" plugins installed. (I disable Cody autocomplete; it can get messy having more than a couple!) Cody is pretty good at explaining code, which can be useful for getting a high-level view of code you've not seen before if it's a couple hundred / thousand lines long. I also have access to TabNine Chat, which is in beta, but I admit I don't remember to make use of it very often! 🙂
Here's an example of TabNine Chat explaining a scheduled script I wrote recently:
s
I'm interested in only that very first sentence above - does your script say anywhere in code comments that it runs 'nightly' and/or 'transfers a file from the file cabinet to sFTP server'?
👀 1
n
Yeah, I mean it's "good" but not that good 😄 BUT I don't describe those other blocks of code (LW_ABSORB_ENV, for example) or explain the search.
The search is just defined, with no comment about what it is:
s
Bummer. IMHO having those other details is far less valuable. The 'intelligence' is in the thoughtful summary as far as I'm concerned.
n
I hear you, but when I open a 2,000-line script that I've never seen before and that is poorly commented, it can save time having a summary generated by AI in about 30 seconds. 🙂
💯 1
s
It's not really generating a summary, though; it's generating detailed descriptions of individual functions/lines of code. The real power is a true 'summary'. Unfortunately, the best summary would need to include information that cannot be gleaned from the code alone (i.e., requirements in business-speak).
I do think AI will eventually be able to do that
n
It's summarising functions and blocks of code; it is a summary, just not at the level you're talking about. I feel fairly confident that if I had linked up my full project, and not just asked it about this script in isolation, it would have been able to give me what you're talking about or something very close (based on how LLMs / vector stores etc. are built). To be clear, I'm not advocating, just discussing. 🙂 TabNine Chat also has these options:
s
I've tried several of these AI tools and none of them have been able to summarize even a complete script file, let alone projects that span multiple files. If anyone has seen otherwise please speak up as I really do want that sort of capability.
n
This may be of interest https://www.codesee.io/
if you like visualisation
s
Those sorts of tools are helpful, but I'm looking for a business description of code (entire scripts or broader), not a technical breakdown. Therein lies the real challenge, I suppose.
m
I've tried a couple of AI coding assistants, and I've found the best results come from Cursor, with codebase indexing and after adding NetSuite documentation as a knowledge source. Also it has a lot to do with the prompts you feed it.
👍 1
@stalbert ☝️
n
Is the code commented at all in a way that would indicate the script's functionality?
s
Thanks @michoel, I'll take a closer look at Cursor!
@NElliott raises a good point as well - if these tools are comments-aware, how much does the quality of code comments affect the quality of the overall summarized output...
m
@stalbert @NElliott I'm sure it will have a significant effect. Though my approach with comments in general is that they should document the why, not the what, and the script I tested on didn't have much in terms of such comments.
👍 1
👍🏻 1
On a semi-related note, I just hooked up GPT-4 to GitHub Actions to automatically review PRs. So far it's looking pretty encouraging:
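(Not the actual action I'm using - just a minimal sketch of the idea. The function name, standards list, and prompt wording are all illustrative; the real workflow step would POST this body to the OpenAI chat completions endpoint with an API key from the repo secrets.)

```javascript
// Illustrative sketch: build the request body a GitHub Actions step could
// send to the OpenAI chat completions API to review a PR diff.
// buildReviewRequest is a hypothetical name, not from any real action.
function buildReviewRequest(diff, standards) {
  return {
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "You are a strict code reviewer. Flag bugs and any violations of " +
          "these internal standards:\n- " + standards.join("\n- "),
      },
      {
        role: "user",
        content: "Review this diff and list any issues:\n" + diff,
      },
    ],
  };
}
```

Checking against internal standards would then just mean extending the `standards` array fed into the system prompt.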
s
Those are all mistakes that can be caught by WebStorm (i.e., before you even commit, let alone PR). I'd love to see a tool that can somehow review PRs against our internal coding standards/best practices.
m
Yup, those are obvious issues that would already be caught by eslint; it's more about the concept. I can throw some more subtle bugs at it that wouldn't be caught by a linter.
In terms of internal standards, it should be simple enough to update the prompt to check against that
n
Don't you have a boilerplate that has a description outlining the functionality? 🤔 Usually I include a date / author / version / brief description.
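For context, a header of that kind in the SuiteScript world might look like this (illustrative only; the `@NApiVersion` / `@NScriptType` tags are NetSuite's SuiteScript 2.x JSDoc conventions, and the description is made up):

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 *
 * Description: Nightly job that transfers a file from the File Cabinet
 *              to an sFTP server.
 * Author:  ...
 * Date:    ...
 * Version: ...
 */
```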
s
I've never been a big fan of 'author' in code comments - VCS has always been a far more reliable way to track such things in much more detail.
☝️ 1
m
Same; I try to keep it to a one- or two-line description and use the VCS for author, version, date, and change log.
@Shawn Talbert this is something no linting rule is going to catch
s
Interesting indeed, @michoel, but is it correct? Or is it raising false suspicion?
m
It's correct
s
So that line should not have changed as shown, but rather the comment is correct? Which tool was that message created from again?
@michoel is that github copilot doing code review above (via github actions it seems)?
m
@Shawn Talbert It's a GitHub action that uses the GPT-4 API
👀 1
s
Has anyone put GitHub Copilot through its paces to see how it compares? If not, I intend to take a closer look at it myself.