Building an AI Tool for Enterprise - Reflections on this Journey

Jan 10, 2024

Last June, Akio Nuernberger and I met on YC Cofounder Match while I was looking for someone to explore new ideas and build a product with. We hit it off and quickly formulated a product idea: a tool that would give technical leaders better visibility into their projects. We started exploring the idea space by arranging user interviews with leaders we thought would benefit from the product. The idea took a few turns: we talked to product and engineering managers first, and after a conversation with an engineering manager from Snowflake, we were tipped off to look at how Technical Program Managers (TPMs) get visibility into their projects, since they work across more teams and manage multiple projects at once.

Talking & Learning

I could write a lot about how the journey progressed from here, but I'll share the key moments. First, we learned about the role of the TPM. It wasn't a role I was personally familiar with, though Akio knew more than I did. If you're not familiar with what a TPM does: they oversee the delivery of complex technology projects, coordinating resource allocation and communicating with technical and non-technical stakeholders to make sure everyone is on the same page and that risks are being addressed and mitigated.

Then, after interviewing a few TPMs, we discovered that they did a lot of manual work gathering information from different sources to compile reports that were sent up to stakeholders (VPs, the C-suite, and other managers). Often, the report needed to be adjusted for different audiences, and many of the TPMs we talked with had already built technical hacks to solve the problem. Awesome! We were onto something: a real problem we could solve with a tool! Our hypothesis was that a tool that automatically gathered data from Jira (to start) and ran it through an LLM would save TPMs a huge amount of time each week. We also believed that focusing on TPMs as a niche was a good starting point, and that the product could easily serve other technical leaders in the future as well.

We built a pipeline of LinkedIn TPM profiles to auto-connect with for user interviews and ended up talking to over 70 people in the field.

The Interviews

Following Rob Fitzpatrick's "Mom Test" strategy for user interviews in this phase, we only asked interviewees about their day-to-day work and the pain points that surfaced. I think we did a pretty good job with the interviews, despite having a strong idea of what we wanted to build. However, I do think this narrowed our exploration of the problem space slightly, since we were motivated to validate the idea we already had rather than press on other pain points we might have been able to address. Of course, it's hard to do all of this in a 30-minute user interview, but I would try to keep more of an open mind the next time I find myself in this phase.

After we had done quite a few interviews, we built a mockup in Figma and started showing it to the people we had initially spoken with. Some were very excited about what we were going to build, which was encouraging and helped us move forward.

Now, most of the people we had interviewed were from larger companies (500+ headcount), the kind that need a TPM in the first place. Because our idea involved getting read access to Jira, we would need to clear a pretty large hurdle: the organization's security review. At this point, though, we believed that if the problem we were solving was big enough, TPMs would be motivated to help us clear it. So, we started to build.

Finally, Time to Build (Right?)

I spent about four weeks building a functional MVP that connected to Jira, could analyze projects via a JQL query, and could write text status reports from that data. The backend is a serverless Node.js app written in TypeScript, and the frontend is a Next.js React app deployed on Vercel. It pulls in epics from JQL queries and analyzes them for important metrics, including:

  • Velocity
  • Recent changes to Epic scope
  • Time in status
  • Predicted end time (overdue or on-time)
  • Changes that may present a risk (assignee changes, story point changes, etc.)

Then, the metrics would be fed into GPT-4 to generate a textual summary of what was happening in the epic, which would show up in the web dashboard:
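The metric computations themselves were simple aggregations over issue data. Here's a minimal TypeScript sketch of two of them, velocity and a naive end-time prediction; the `Issue` shape and field names are illustrative, not the actual Jira API types:

```typescript
// Hypothetical issue shape; the real MVP pulled these from Jira via JQL queries.
interface Issue {
  key: string;
  storyPoints: number;
  status: "To Do" | "In Progress" | "Done";
  resolvedAt?: string; // ISO date, set once the issue is Done
}

// Velocity: story points resolved within a trailing window (e.g. 14 days).
function velocity(issues: Issue[], windowDays: number, now: Date): number {
  const cutoff = now.getTime() - windowDays * 24 * 60 * 60 * 1000;
  return issues
    .filter(i => i.status === "Done" && i.resolvedAt !== undefined)
    .filter(i => new Date(i.resolvedAt!).getTime() >= cutoff)
    .reduce((sum, i) => sum + i.storyPoints, 0);
}

// Naive prediction: remaining story points divided by observed points per day.
function predictedDaysRemaining(issues: Issue[], pointsPerDay: number): number {
  const remaining = issues
    .filter(i => i.status !== "Done")
    .reduce((sum, i) => sum + i.storyPoints, 0);
  return pointsPerDay > 0 ? remaining / pointsPerDay : Infinity;
}
```

Numbers like these are easy to compute but, as we learned later, they only tell part of the story.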

Dashboard with Tangential Jira

This enabled quick insights into how the epic was progressing, and by computing metrics that were relevant to reporting status, we could feed these into the report generation feature.

Getting insights on epic progress wasn't the primary feature we were aiming for; Jira does this pretty well already, even if it's missing some key features that would be useful for anyone trying to get a higher-level view of progress. The analysis existed to address the main pain point we had in mind: generating status reports for executives and other stakeholders.

Report Generation

The report generation took all the metrics computed in each project, as well as their summaries, and used GPT-4 to write a status report, showing the epics that were at risk as well as possible actions to be taken. The idea was that it would serve as a starting point for something to be posted in Slack or sent out via email.

A Tangential generated report

Even when testing against our own Jira, with its small number of data points, we thought it worked quite well. It could easily handle large epics and projects by breaking them into smaller chunks of work to be processed asynchronously. I personally had a ton of fun building this, and it came at the perfect time: GPT-4 Turbo with its 128k context window had just come out, making it easy to generate full-fledged reports from a lot of data. It was a proper MVP, something we could take back to the TPMs we'd talked to initially, which is what we did.
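The chunking worked roughly like a map-reduce over issues: summarize batches concurrently, then summarize the partial summaries. A simplified TypeScript sketch, where the batch size is illustrative and `summarize` stands in for the actual GPT-4 call:

```typescript
// Split a list into fixed-size batches so each LLM call stays well
// under the context window.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Map-reduce style summarization: summarize each batch concurrently,
// then combine the partial summaries into one report-ready summary.
async function summarizeEpic(
  issues: string[],
  summarize: (text: string) => Promise<string>, // stands in for the GPT-4 call
  batchSize = 20
): Promise<string> {
  const partials = await Promise.all(
    chunk(issues, batchSize).map(batch => summarize(batch.join("\n")))
  );
  return summarize(partials.join("\n"));
}
```

Because the batch summaries are independent, `Promise.all` lets them run concurrently, which keeps large projects from serializing into a long chain of LLM calls.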

Hey, We've Got Something!

This is where we got our first big, concrete blow. We approached the TPMs who said they were interested, and many responded with “I’ll talk with my manager about it” or “I think it’s going to be really difficult with our security team”. We reflected more, talked with a few more people, then realized that the problem we were solving wasn’t urgent or needed badly enough. As we were realizing this, we started reaching out to TPMs in smaller companies with our MVP proposition. We were met with almost complete silence. This is when we decided that, while our tool might be useful to some, it didn’t solve a problem big enough to be a viable business.

Reflecting Deeper

Reflecting further on what we were doing, we realized a few more key things. First, the task we thought we could automate, while manual and laborious, is actually something that helps TPMs build trust and communicate with stakeholders. Gathering information through Jira can help a bit, but much more comes through 1:1s and other direct communication. Knowing when something is going to fall behind schedule is a critical part of the job, and Jira often won't accurately reflect what is really going on with a project or team. It can be a starting point, but a culture of trust and honest communication will get better results than data in a spreadsheet.

We learned a lot during this journey:

  • We should have started by zeroing in on a problem, not by selling a solution.
  • Think twice about using automated outreach to find potential users. Many of the TPMs who responded were at larger companies, which we might have been able to rule out had we thought harder about the security requirement.
  • Validate by getting a commitment to use or buy before building anything. We might have gotten better signals about commitment if we had approached smaller startups right off the bat.
  • Use negative hypotheses when ideating to surface dealbreakers early (in our case: the security review, and the fact that automation might not be ideal because of its second-order effects).
  • Know when to stop: doing so many user interviews felt like progress, but it didn't move us forward as much as we thought. It was a kind of "fake" validation that made us feel good about our direction.

All this will come in handy moving forward, and I’m happy we got the opportunity to gain this experience.

I’m also very grateful to all the people we met along the way. We met some fantastic leaders at world-class companies, and it was really satisfying to learn how they are able to deliver incredibly complex projects effectively. If you are reading this and were one of these people, Akio and I really appreciate the time you took with us.

We've Open Sourced the MVP

We decided to share the code for the MVP we built with everyone! You can find it on GitHub:

Tangential Backend (Node.js/TypeScript)

Tangential Frontend (React/Next.js)

Tangential Core - Shared Functionality (TypeScript)

Cheers, and here’s looking forward to the next venture!