Question Details

No question body available.

Tags

professionalism communication startup ai

Answers (12)

July 11, 2025 Score: 44 Rep: 951 Quality: High Completeness: 30%

In my opinion, this CTO is engaged in magical thinking. Trying to salvage a failing situation with new, over-hyped technology is an old anti-pattern in software development. AI is only the latest iteration.

You say code quality is already poor; it is not clear that purchasing AI subscriptions is going to improve it. And trying to use velocity in the Agile sense as a measure of productivity, instead of delivered features, is already a red flag. Even if "number of story points completed" increases, it is not clear that this is anything but a meaningless metric (and easily gamed by savvy developers) if the company is already failing to deliver value and features to its customers, as you suggest is happening.

In my opinion, how you respond depends on your position in the company and how likely you are to be listened to.

Possibly your position allows you to point out that pinning hopes on the AI silver bullet to salvage the situation is not likely to bear fruit, and to suggest alternatives. The fact that you are new weighs against this option - it depends on how much professional respect this CTO has for you coming in, and only you can gauge that.

If you are not in a position to professionally push back, the other option is to move on - and, if the situation is not salvageable, sooner rather than later, in my opinion. If you decide on this, when interviewing for your next job, I would be proactive about saying you are looking for a company that cares about code quality and can evaluate the capabilities and limitations of new technologies objectively. The kind of place you want to work at next will respect this.

July 11, 2025 Score: 21 Rep: 8,046 Quality: Medium Completeness: 0%

Give the AI coding assistant an honest try. If it slows you down, document why and how, then tell your boss.

July 12, 2025 Score: 16 Rep: 78,466 Quality: Expert Completeness: 30%

Management's AI idea is probably stupid. However, they are entitled to direct that people try whatever tool strikes their fancy, with the understanding that if the tool fails, that is not the fault of the employees, and that if it only succeeds for a few, they need to work to understand why and whether those results can be replicated for others.

I spent most of a year working with a tool that was particularly poorly suited to the task my second line wanted to use it for, with the outcome that we successfully demonstrated that it was a bad choice. I still got a good performance review for the work I had done to try to clearly demonstrate and overcome its limitations, and the assistance I had given others in trying to make it work.

So sometimes the right response is to register your concerns, then do your best work under the given constraints anyway.

Management is your customer. The customer is not always right. The customer is always the one holding the money. Would you rather be right, or paid?

July 11, 2025 Score: 9 Quality: Medium Completeness: 50%

"... ship out features, before a well-funded competitor does."

What's preventing a 'well-funded competitor' from buying more AI compute power and creating better, more integrated versions of those same features in less time than your ravenous Start-Up can?

A skilled leader knows when to hold, and when and where to press forward.
To win a war, not every battle need be fought, or even won.


Aside: Who's not had the experience of witnessing the impatient passengers stand up, before the aircraft has come to a complete stop, to haul their carry-on down from the overhead bins? They then stand, crowding the aisle, acting out their impatience for all to see, waiting for their counterparts closer to the door to surge forward. I always enjoy catching up to them at the stationary luggage conveyor in the terminal while we all await its lottery of which bags will appear in what order... I especially enjoy seeing them join the taxi queue after me on those occasions when I've beaten them in that luggage carousel lottery. They tend to be hot and bothered, having worked themselves into a state, while I've a smile, having had a good time playing my lottery game by myself. We all get where we're going eventually.

Morals:
Good things come to those who [can] wait; patience is a virtue
Haste does not fill the fisherman's basket any fuller
Failing to notice and deal with one bad apple can spoil the whole bunch; haste makes waste
Slow and steady wins the race


Step back and evaluate which end of the puppet strings you are at...

It strikes me that you will never be happy working for a company that, by your analysis here, is only in the game for the short term.

Take it from a formerly impoverished cyclist: Patches on patches is "false economy".

Ramshackle constructions cobbled together from scraps of materials will, in the long run, collapse. (It doesn't sound like there's any 'distant horizon' or 'overarching vision' in this company; just churn out trinkets to grab a few bucks from unsophisticated customers.)

If you're not happy now, it's extremely unlikely you'll ever be happy where you are currently. The one who pays the piper calls the tune. Seems like you need to select a different channel/piper.

In my long-ago experience, a 'break' in the chronology was rarely a big issue. You could choose to simply omit 4-5 weeks from your CV and go hit the bricks looking for a new job. If you do reveal what you've been doing at future interviews, DO NOT 'trash' the company you've left behind. If anything, perhaps simply state that the position had been 'over-hyped' to you but revealed itself to be merely a junior's junior position, with little appeal to your desire to learn, grow and develop your skills.

Read about the Luddites and ask yourself if your sympathies are with the masters of their craft or with the investors whose motivation was to use cheaper labour to increase their profits.

From one who's walked that path: Get out soon in order to cap the amount of regret you will feel the rest of your days on Earth. Vacate asap to let a copy/paste code monkey have a chance to fill that role. Your other question in this community indicates an attitude and aptitude that does not mesh with your current role. Change roles...


AI

A long-surviving codebase is likely to be made up of hundreds of thousands of lines of code. If well designed and well maintained, it embodies a 'wholistic' perspective of its purpose (not "a bit o' this, bit o' that" assembly of fragments.)

While it's still early days for AI code writers, and companies are implementing various strategies to overcome current physical limitations, it's my opinion that a capable human coder will provide, in the long run, more insightful and synergistic solutions than an AI. The capable human is driven by both pride and a knowledge, or sense, of past and future directions. An AI is blinkered by its own limitations and absence of awareness.

The 'context frame' of an AI may be large and growing. The 'context frame' of a human mind is already boundless.

(Although it was only ChatGPT 3.5, I tried asking it a few different times to write code for some functionality. It spewed some code that was 'reasonable' to expect from an intern. Missing was, for instance, recognition that the number line continues to the left of '0'... Negative values were not handled or even acknowledged until it was specifically prompted to do so... Other trials, likewise, came up short, but the AI was confident it had met the challenge. Bailing out a sinking boat will extend its time afloat, but is not a long-term strategy for success.)


"... improve velocity"

Heisenberg's Uncertainty Principle applies to the macroscopic world, too.

The faster you're going, the less likely you can determine where you are, how you got here, and where exactly you're headed... But, you know you're going fast-fast, with tear-filled eyes unable to see the cliff's edge ahead.

You can't crack open the keg 3 months after it was bunged and expect its contents to taste as good as 20-year-old scotch.


AI (reprise) - Two tenuous analogies:

Chess:
Admission: As a chess player, I rate myself as an 'advanced beginner', or possibly 'very junior intermediate'. I've not taken the time to learn/practice multi-ply analysis and strategies that might lead to a victory. Instead, I tend to pick the easy-to-reach fruit of a few captures, not noticing I'm walking myself into an opponent's trap.

Because its 'worldview' is, for the time being, restricted, an AI might yield code that solves an immediate problem, but that code becomes a 'legacy' that handicaps future developments.

Short term gain for long term pain; aka "short-sighted".

Scrabble:
Imagine playing Scrabble while only being able to see a single, stationary, 5x7 'window' onto the complete board (35 squares out of the 225 on a 15x15 Scrabble board). What are the chances that your play will return the best score that would be achievable with a view of the entire board?

AlphaGo started from some 30 million positions from games played by humans, then played millions more games against itself to train up... Ask Google to estimate the cost of training AlphaGo. Then ask yourself if AlphaGo could write code in any language to implement a simple linked list...

While your CTO dreams of sugar-plum fairies, your situation is, in my eyes, untenable. I strongly recommend finding another employer, who will still be around in a year or two, as soon as possible.


Product Suitability and Quality Control

The transition from employing and investing in the competence of human employees to relying on somewhat-technical staff capable of composing AI prompts is worrisome. Read about the children's game called 'Telephone' and consider the situation wherein one or more of the information-processing 'stages' has no 'overview', nor the vocabulary or understanding of what it receives, before it tries to 'forward' information on to the next stage.

If the customer only needs off-the-rack capabilities from the product, then the customer will purchase generic off-the-rack products.

In my experience, customers often require 'custom-fit' solutions that integrate seamlessly with their existing circumstances. RAD techniques tend to anger customers who rightfully sense that their specific requirements have been ignored and will likely continue to be ignored. They are strongly motivated to 'try their luck elsewhere', taking their money with them.

July 13, 2025 Score: 9 Rep: 36,470 Quality: Medium Completeness: 20%

Other answers have covered the AI angle very well.

One thing I’d caution you about is overconfidence that security and quality matter in your situation. Yes, they do over a long term - but you might not be in a long term situation. While the CTO might be overreacting to it, small startups do need to beat their competitors enough to take flight. Will customers go elsewhere if your code quality is bad? Only indirectly. They’re much more likely to go elsewhere if your competitor’s product is available first, or costs less.

July 13, 2025 Score: 6 Rep: 7,056 Quality: Medium Completeness: 30%

The anecdotal evidence I see is that this all-AI expectation has suddenly become very widespread in the software industry. Among my experienced, close friends in the field who have been looking for new jobs in the last year, we've all been surprised at how rapidly this became a demand in their job interviews (including the monitoring of AI tool usage by staff). This covers at least a dozen job interviews within this year.

My advice would be to separate the two issues of AI-usage versus code-quality in your head. I don't think you're going to be able to avoid the AI-usage demand no matter where you turn now. But if your company culture and processes are turning out buggy and insecure code, then document that, and feel free to suggest process improvements. If a particular AI tool is demonstrably a factor in consistently buggy code, then perhaps research best-practice or a different AI tool. But just stamping the brake on all AI tools by default is probably a non-starter at this time, no matter what your prior experiences have been.

July 12, 2025 Score: 3 Rep: 173,746 Quality: Medium Completeness: 20%

Look at the salary you get paid every month. Make an educated guess how long that salary will be paid. If you are offered share options etc. make a happy face but don’t accept them instead of your salary.

At the same time look for better / better-paying jobs but don't let anyone know. When you find a better job and have signed a contract, or when the salary stops, then you move.

Seems I didn't mention AI at all. It doesn't matter. What matters is whether the company will succeed, which I doubt. And the only benefit of the AI is that you can put it on your resume.

July 14, 2025 Score: 3 Rep: 9,176 Quality: Medium Completeness: 80%

Yes, GenAI is massively overhyped at the C level. The CEOs of the major players (Google, OpenAI etc.) have to do this because they are all-in with ungodly investments; and the CEOs or CTOs of regular companies are often not technically savvy enough to see through the hype. Also, many people are at different phases of the hype cycle.

All this is true, and yet: even the most basic GenAI tool, i.e., a GitHub Copilot license in your plain old IDE of choice, is, in my experience, a moderate to massive speed-up. Even if you do not actually ever write any prompt, or use any of the "magic" bits of GenAI (where a CEO suddenly can create a web game with only a few prompts). The point is, no matter how fast you type, the GenAI auto-completion will be useful more often than not and will guess what you intended to write just fine. You can always ignore it, or accept it (typically by pressing Tab) to save 10 seconds of typing. Over a day of coding, this saves time and has very little chance to cost you any time.

And as soon as you discover how to write prompts (hint: you can simply write a comment in your code and see the GenAI's auto-completion on the next line, and take that more as a suggestion than as magic code), you save further time because you are saving on context switches (moving over to your browser; maybe copying stuff between browser and IDE...). You do not need hours of savings for each interaction; even a few seconds here or there add up.
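
To make that comment-as-prompt hint concrete, here is a minimal, hypothetical sketch: you type the comment and the function signature, and the assistant offers a body as an inline suggestion you can accept or keep typing over. The function, its name and the exact completion shown here are illustrative assumptions, not output any particular tool is guaranteed to produce.

    import re

    # You type the comment and the signature; the assistant proposes the body as an
    # inline suggestion you accept (e.g. with Tab) or ignore. Treat it as a starting
    # point to review, not as magic code.

    # parse "1h30m" / "45s" style duration strings into a number of seconds
    def parse_duration(text: str) -> int:
        total = 0
        for value, unit in re.findall(r"(\d+)\s*([hms])", text.lower()):
            total += int(value) * {"h": 3600, "m": 60, "s": 1}[unit]
        return total

    print(parse_duration("1h30m"))  # 5400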

Also this helps indirectly; it is known that context switches (from actively writing code to web searches, wading through random tutorials or stackexchange answers, for example) cost time in themselves.

Eventually, you will find out how to write a prompt that helps your IDE find that missing semicolon or quotation mark in two screenfuls of code, and that can easily save a few minutes of trying to work out where it went wrong in programming languages that are not stellar at pointing at quite the correct location.

So this is the kind of time savings you can expect from day 0. Every developer or devops engineer, or anybody working with anything resembling code or configuration can use that.

Once you're familiar with that, it's very hard to go back. And then you figure out how to write prompts to get very robust, large scaffoldings for what you intend to do - once the brain-machine connection "clicks", any current frontline GPT is very good at saving time.

So, to give your CTO the benefit of the doubt: he might just want to avoid developers refusing to work with GenAI because they are unfamiliar with it, or refusing it on moral or ethical grounds, or anything like that. If the latter are issues for you (which you do not indicate in your question), then it's obviously time to look for a different job. If you have any other reason, I cannot see it. Get your Copilot key, put it in your IDE, and see what happens. The IDE will let you know what's available; you literally need to do absolutely nothing to benefit.

July 13, 2025 Score: 2 Rep: 121 Quality: Low Completeness: 30%

You have the opportunity to be on the cutting edge of software development. I'm not sure which AI helper you are being mandated to use, but I was just testing the Copilot Agent for my firm and I would say it is incredibly useful and is, in its current iteration, a game-changer if utilized by a knowledgeable developer. It's definitely not to the point where it will put people out of a job, as you need to have a good understanding of any significantly sized codebase that the agent is submitting PRs for. It produced broken (almost valid) code the majority of the time, but was very good at analyzing requests made of it and producing a valid plan outline, taking into account both the codebase and the request. The final execution still needs hand-holding, but this will become the standard very soon. I can't imagine going back to programming in a notepad-style text editor when I have all of the improvements the various IDEs have provided us, as well as ReSharper / VAssist / code formatters / auto-generation / paste-JSON-to-class etc.

If you've only had the one professional software-dev job (including whatever other hats you wore), I'd recommend you jump head-first into this and expand your skills and proficiencies. You should demand time to become proficient at utilizing this new system, and you should attempt to record the methods that were beneficial to you and share them with your colleagues.

One of the places where I've found AI to excel is organizing, describing and documenting existing code. If you have a codebase that is rapidly getting out of hand, then that is one of the first places to start utilizing this tool. Devising organizational and efficiency guidelines for you and the other developers to adopt - ones that steer the momentum away from wild, tangled code toward something that can be easily understood (or at least easily debugged) - is something that can make you stand out.
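
As a hypothetical illustration of that documentation use case (the names and the "after" docstring below are invented, and you should still verify any generated description against the code's actual behaviour):

    # Before: an undocumented helper from a (hypothetical) tangled codebase.
    def f(xs, k):
        return sorted(xs, key=lambda x: x.get(k, 0), reverse=True)[:3]

    # After: the same helper, renamed and given a docstring, the way an assistant
    # might propose when asked to "document this function". Check that the
    # description matches what the code really does before committing it.
    def top_three_by_key(records, key):
        """Return the three records with the largest value for `key`.

        Records missing `key` are treated as having a value of 0.
        """
        return sorted(records, key=lambda r: r.get(key, 0), reverse=True)[:3]

    print(top_three_by_key([{"score": 5}, {"score": 9}, {}, {"score": 7}], "score"))
    # [{'score': 9}, {'score': 7}, {'score': 5}]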

July 15, 2025 Score: 0 Rep: 1 Quality: Low Completeness: 20%

You should be reviewing the AI code and rejecting it if necessary - and they probably want that too. So use the AI assistant (I find it useful in my case). If they have a problem with you rejecting unacceptable code - and you can say why it's unacceptable - then that's a different conversation where you're on firmer ground.

As for the leaving after one month - if worst comes to worst, stick it out for 3-6 months and look for a new job. Put the startup down on your CV as a contract rather than a permanent position, which will explain why you're looking so soon.

And next time you look for a position, interview them too. Nowadays I usually ask "what is the quality of the codebase like?". It's an unexpected question, so you often get an honest answer. Sometimes the technical guy laughs out loud and says "terrible!" - I avoid those jobs. Usually the answer is "it used to be much worse, but it's getting better". I can live with that.

July 13, 2025 Score: -1 Rep: 99 Quality: Low Completeness: 50%

You should probably follow the other answers' advice of leaving this CTO in the dust.

But this won't be the last workplace where you'll encounter pressure to adopt AI. Even when the overhype wears off, AI will stick around. Use your skepticism to quickly throw away the nonsense, then keep experimenting to find the real workflow level-ups. You could even learn those lessons on this CTO's dime.

There is so much beyond feeding your Jira ticket text into the LLM:

  • How often do you use your IDE's autocomplete feature? AI autocomplete plugins take that to a whole new level. Bonus: this quickly (and legitimately) boosts the metric your boss cares about.
  • The AI you mistrust is still amazingly fast at helping you bootstrap all those reliable SDLC patterns you do trust (see the sketch after this list). Your boss wants speed? Do them one better by delivering maturity.
  • At a startup, you can't rely on a mature platform team or security team to set up the dynamic code analysis that would detect XSS automatically. The AI can help you set that up.
  • AI knows your coding style when you feed it properly.
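
For the SDLC-patterns point above, here is a hypothetical sketch of the kind of test scaffold an assistant can draft in seconds when prompted with something like "write pytest tests for slugify()". The module, function and cases are invented for illustration; every assertion still needs your review.

    import pytest

    from myapp.text import slugify  # hypothetical module under test


    @pytest.mark.parametrize(
        ("raw", "expected"),
        [
            ("Hello World", "hello-world"),
            ("  spaces   everywhere ", "spaces-everywhere"),
        ],
    )
    def test_slugify_basic_cases(raw, expected):
        assert slugify(raw) == expected


    def test_slugify_rejects_empty_input():
        # Assumes the hypothetical slugify() raises ValueError on empty input.
        with pytest.raises(ValueError):
            slugify("")
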
July 11, 2025 Score: -6 Rep: 1,672 Quality: Low Completeness: 20%

The AI business case is staff reduction and cost saving. Your CTO is adopting it on the assumption that AI will increase his or her workforce's output. All leading sources about AI in business agree that AI will only pay off after years of use; the break-even point on AI costs is somewhere after three years.

It looks as if this company will have a different CTO or will go bankrupt.

I would recommend that the board and investors change the CTO. If they do not change the CTO, leave.

Chances are high they are already looking for a new CTO as the AI business case is common knowledge in the industry.