Great post Sancha! I've been doing a lot of the same myself. It's done wonders for my productivity.
I've found that keeping clear rules for different tasks, and referencing rules inside other rules to keep things DRY, has dramatically improved what Cursor can achieve unsupervised.
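For what it's worth, here's a rough sketch of what that layering can look like with Cursor's project rules (`.mdc` files under `.cursor/rules/`). The file names and rule contents below are invented for illustration, and the exact frontmatter fields and `@file` reference syntax may differ by Cursor version, so treat this as a sketch rather than a spec:

```markdown
<!-- .cursor/rules/writing-style.mdc : shared base rule -->
---
description: House writing style, referenced by task-specific rules
alwaysApply: false
---
- Prefer short sentences and active voice.
- Avoid filler words and hedging.

<!-- .cursor/rules/blog-posts.mdc : task rule that reuses the base -->
---
description: Drafting blog posts
globs: posts/**/*.md
---
@writing-style.mdc
- Open with a concrete anecdote; close with one takeaway.
```

The point is that each task-specific rule stays short because the shared style lives in one place.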
Lastly, once I've taken the output and edited it heavily, I give it another pass through Claude Sonnet to critique the content further, helping me polish the final result.
Great post, thanks for sharing! PS: I got here by following the link you shared on LinkedIn, link by link :)
I use quite a few different AI chat interfaces: the official web or desktop UIs (e.g. the ChatGPT and Claude apps, Grok on the web), some third-party LLM chat apps that work via API calls (5ire, OpenCat, DeepChat, etc.), and code editors like Zed (sorry, not a Cursor user). What you describe could be a good solution for a few things:
* Avoiding conversations scattered across different apps, LLM servers, and LLM clients. Many LLM chat apps keep conversation history and prompt libraries in a local database like SQLite, which is hard to sync between them. But you put everything in plain files and folders and manage them with Git. Different files can be worked on by different AIs, and that's fine; now they can all be shared with each other.
* Version control with Git is promising. I keep deleting and renaming conversations to stay organized, but maybe I could instead just review the changes and then discard or branch them.
* Sharing such a repo with a team is an interesting idea, though it may also be dangerous. If we do this perfectly, then every day, as human beings, we keep feeding data and context to the AI, making changes with AI, and tracking them with AI. If someone leaves the team, as long as that context (the brain dump) is there, someone else can easily write a new blog post in the same tone (maybe cloning both the writing style and the voice). Everyone becomes easily replaceable. Wow, that's not just a tech problem; it's a culture shift.
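The review-then-discard-or-branch workflow above can be sketched in a few Git commands. This assumes conversations are saved as plain Markdown files in a repo; the file name and commit messages here are made up:

```shell
# Sketch: chat transcripts as plain files under Git, so an edit can be
# reviewed, branched, or discarded instead of deleting the conversation.
repo="$(mktemp -d)" && cd "$repo"
git init -q .
git config user.email "me@example.com" && git config user.name "Me"

# Save a transcript (file name invented for illustration)
printf '# Brainstorm: blog outline\n' > 2024-05-outline.md
git add . && git commit -qm "save outline conversation"

# Edit it heavily, then decide what to do with the change
printf '## Revised after critique\n' >> 2024-05-outline.md
git diff --stat                      # review the change...
git checkout -q -b experiment-voice  # ...or branch to try a variation
git restore 2024-05-outline.md       # ...or discard the edit entirely
```

Nothing is ever really lost this way; the history stays in the repo instead of in some app's private SQLite file.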
Again, I like your post a lot. There's a theory that a person with good leadership skills among humans can also use AI tools well, since similar skills apply: breaking down tasks, prioritizing requirements, and understanding the pros and cons of each tool and process. Thanks for sharing your practices. 💯
Thank you Jove for the super insightful reply and for reading my post!
One thing I want to try is to set up good MCP servers for Google Drive and Gmail, so that I can bring specific emails or documents in as context for conversations. I’ll do a follow-up post if I make it work :-)
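For anyone curious what that wiring could look like: Claude Desktop reads MCP servers from an `mcpServers` block in its config file. The sketch below uses the Google Drive server from the Model Context Protocol reference-servers repo; package names there have changed over time, so verify the current one before relying on this (and I'm not aware of an official Gmail server, so it's omitted here):

```json
{
  "mcpServers": {
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gdrive"]
    }
  }
}
```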
And I agree with what you say about AI: I’m convinced that pretty soon work is going to mean working and collaborating with a set of agents; my workflow is like the roughest version of that.
I enjoyed the latest episode of Lenny’s podcast, about Devin.ai (a developer agent), which talks about this.
Thanks for sharing! Loved sneaking into your workflows.
haha thanks!
Super useful stuff, I need to spend more time in agent mode when I'm back.
Thanks Ramiro!
Amazing times we're living in!
Nice post! Thanks, Jorge, for writing it, even though you probably just edited it!! 😁 Thanks for sharing 🙏