Daniel Kravets, Technical Lead, Vendict

January 22, 2026, Terkel

This interview is with Daniel Kravets, Technical Lead at Vendict.


Can you introduce yourself and tell us about your current role in the tech industry, particularly in relation to startups, AI, and backend development?

My name is Daniel Kravets. I have more than ten years of experience in development, and for the last four years I have worked as a tech lead at Vendict, an AI-native GRC automation platform that streamlines security questionnaires, audits, and compliance workflows. We build a trusted knowledge base from a company’s policies and processes, generate accurate responses without hallucinations, and provide full traceability. My area of responsibility is architecture, infrastructure, and how AI becomes part of daily development, from specifications to code generation and tests. We focus on spec-driven development: a live specification that both people and agents use, which helps us keep our speed without losing control.

What inspired you to pursue a career in tech, and how did you navigate your way to your current position working with startups and AI?

I was always interested in systems engineering: how you can take many small parts and build something that works and stays stable. I started as a backend developer and later moved into architecture, but the turning point was 2022 and the release of GitHub Copilot. It became clear that code could be written in a new way, through description rather than line by line. That year I moved to an AI-oriented startup, not because I wanted to work with artificial intelligence, but because I saw a chance to change the engineering process itself: to make it faster but also clearer.

You’ve mentioned ‘vibe coding’ in your experience. Can you explain what this concept means and how it has influenced your approach to software development, especially in startup environments?

The idea of “vibe coding” is that you start to trust the solution at the level of intention rather than at the level of single lines. For me, it is not blind trust in AI; it is working with a partner who needs clear boundaries. I always set guardrails, like tests, specifications, and review rules, and then it becomes not “AI wrote it for me” but “we solved the task together, and the responsibility is still on me.” In a startup, this approach is especially important because it helps you avoid getting stuck on micro details and move faster.

In your experience leading a GenAI startup, how did you balance the need for rapid development with maintaining code quality? Can you share a specific challenge you faced and how you overcame it?

We do not avoid compromises; we manage them. In the early stages, quality is not the absence of bugs; it is the predictability of how the system behaves. In one case, we were adding a new system for answering GRC questions, and the AI model sometimes produced unexpected phrases. Instead of rewriting the architecture, we invested in monitoring and alerts, so when something went wrong, the system told us. That way we kept our speed and did not lose control. The key is not perfection but real-time feedback.
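A minimal sketch of what such a monitoring check might look like, assuming a simple phrase-based guard; the function name, forbidden phrases, and alerting channel are illustrative, not Vendict's actual implementation:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("answer-monitor")

# Hypothetical guard list: phrases that should never appear in a GRC answer.
FORBIDDEN_PHRASES = ["as an ai language model", "i cannot answer"]

def check_answer(question: str, answer: str) -> bool:
    """Return True if the answer passes basic sanity checks; alert otherwise."""
    lowered = answer.lower()
    problems = [p for p in FORBIDDEN_PHRASES if p in lowered]
    if not answer.strip():
        problems.append("empty answer")
    if problems:
        # In production this could page someone; here we just log a warning.
        log.warning("Suspicious answer for %r: %s", question, problems)
        return False
    return True
```

The point is the feedback loop, not the specific checks: any answer that trips a rule surfaces immediately instead of reaching a customer unnoticed.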

You’ve talked about spec-driven development as an exciting trend. How have you implemented this in your work, and what tangible benefits have you seen from this approach in AI-driven projects?

We formalized this process through Cursor, and our specification lives as a set of rules. Some are global (“always apply”), some are contextual (“apply intelligently”), and some are tied to file names. After every iteration the agent suggests updates to the documentation, and we save new patterns, data models, and edge cases. The effect is fewer manual fixes, clearer reviews, and easier onboarding: new engineers see not a dry document but a live map of the project.
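As a sketch of what one such rule might look like, here is a hypothetical Cursor rule file, assuming Cursor's `.mdc` format with frontmatter; the description, globs, and rule content are illustrative only:

```
---
description: Response conventions for GRC answer endpoints
globs: src/api/**/*.py
alwaysApply: false
---

- Every answer endpoint returns an object with answer, sources, and confidence.
- New edge cases found during review are appended to the project spec document.
```

A rule with `alwaysApply: true` would be global, one with only a `description` is applied contextually by the agent, and `globs` ties a rule to specific file names, matching the three kinds of rules described above.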

As AI continues to evolve, what do you see as the most significant challenges for backend developers in startups? Can you provide an example from your own experience?

The main challenge is not syntax but the architecture of trust. When part of the code is written by AI, you need to build a system where everything is checked automatically: schemas, tests, and access limits. For example, at one point the agent decided to “optimize” an API and changed the contract without telling anyone. After that, we added a schema validation layer, and now no change can go through without a check. The backend becomes not just a set of services but an ecosystem of rules and validations.
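A schema validation layer like that can be sketched in a few lines of plain Python; the field names and contract here are hypothetical, not the actual API:

```python
# Minimal sketch: fail loudly when a response drifts from its declared
# contract, so an AI-made "optimization" cannot slip through silently.

EXPECTED_SCHEMA = {"answer": str, "sources": list, "confidence": float}

class ContractViolation(Exception):
    """Raised when a payload deviates from the declared API contract."""

def validate_response(payload: dict) -> dict:
    """Return the payload unchanged, or raise ContractViolation."""
    missing = EXPECTED_SCHEMA.keys() - payload.keys()
    extra = payload.keys() - EXPECTED_SCHEMA.keys()
    if missing or extra:
        raise ContractViolation(f"missing={sorted(missing)} extra={sorted(extra)}")
    for field, expected_type in EXPECTED_SCHEMA.items():
        if not isinstance(payload[field], expected_type):
            raise ContractViolation(f"{field} is not {expected_type.__name__}")
    return payload
```

In practice a library such as Pydantic or JSON Schema would do this job, but the design choice is the same: the contract is encoded once and enforced on every change, whether the author is a human or an agent.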

You’ve mentioned the importance of treating AI output as untrusted. Can you walk us through your process for reviewing and testing AI-generated code, particularly in a fast-paced startup environment?

The process is similar to CI/CD, except that we add a step for checking the AI's logic.

1) We write the specification for the human and for the agent.

2) We generate the code.

3) We run the tests, and if the AI moved away from the spec, the test fails.

4) We analyze the deviations, and we either improve the spec or adjust the prompt.

It is a constant cycle: describe, check, refine. The main thing is not to confuse automation with autonomy.
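Step 3 of the cycle above can be sketched as a spec test, assuming a traceability rule like the one Vendict describes (every answer must cite a source); `generate_answer` is a hypothetical stand-in for the AI generation step:

```python
# Hypothetical spec invariant: no answer ships without traceability.
# generate_answer stands in for the AI code-generation/answering step.

def generate_answer(question: str) -> dict:
    # Stand-in implementation; in reality this calls the model pipeline.
    return {
        "answer": "We encrypt data at rest.",
        "sources": ["policy/encryption.md"],
    }

def test_answers_are_traceable():
    result = generate_answer("Do you encrypt data at rest?")
    # Spec rule: if the AI drifts and drops sources, this test fails.
    assert result["sources"], "answer has no sources; spec violated"
```

If the agent's output ever violates the specification, the failure points back to the exact rule that was broken, which is what makes step 4 (improve the spec or adjust the prompt) possible.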

How has the integration of AI tools changed your hiring and team-building strategies? What skills do you now prioritize when building a tech team for an AI-focused startup?

We started looking not for one more programmer but for people who think in systems. It is important not only to write code but also to describe tasks, explain the problem clearly, define success criteria, and check the result. I pay attention to curiosity, critical thinking, and the ability to test hypotheses. AI takes over the routine, and the human needs to guide it; the one who knows how to guide becomes impossible to replace.

Looking ahead, what advice would you give to aspiring tech entrepreneurs who want to leverage AI in their startups, particularly from a backend perspective?

AI is an accelerator, but if the process is chaotic, it will only accelerate the chaos. First define what you want to automate, what the success criteria are, and how you will check the results. Most importantly, always verify the AI the way you would an external contractor: with tests, logging, and monitoring. That way you keep control and avoid the illusion of smart magic.

Thanks for sharing your knowledge and expertise. Is there anything else you’d like to add?

Yes! This is a great time for experiments. AI does not replace engineers; it needs new engineers, the ones who know how to ask questions and not only write answers. Do not be afraid to try, but build the system so that every acceleration stays under control. And remember: perfect code does not exist, but meaningful code does.