The First AI Employee

Who is the first AI employee?

What will they look like?

How will they sound?

Will it be a single unified architect that manages all communication between non-AIs and AIs? Or will there be many voices? Many personalities? Many different styles of communication?

Sally is handling that HR issue. Frank just found and opened a sales channel with a new client. Hassan delegated tasks to both humans and other AI agents to push out the next release.

Will we report to them?

How will we distinguish ourselves from them?

If Elizabeth scores among the top academically and has studied more experience-based lessons than a human could ever read in a lifetime, does that not make Elizabeth qualified to make decisions?

Surely, in time, with consistent results, Elizabeth will be qualified to manage more responsibility: more complex and important projects, more agents, and perhaps someday even humans.

How will we handle conflicts? Does HR (Human Resources) need an equivalent AR (Agent Resources)? There must be some overlap, but there will also be certain distinctions that will be critical to identify.

If ChatGPT 4 makes a mistake but ChatGPT 5 does not, we would treat the version update with some skepticism, yet we would also be more forgiving with every future iteration: an acceptance of each new version as a fresh start.

Edward, by contrast, sometimes leaves early and at times can be difficult to get along with. There is no update pushed out to Edward; he is human, shaped by his habits and environment.

But we must give more weight to Edward's words: he is human, his experience of reality is different, and he has an intuition and an awareness we cannot identify in agents.

What do we do when Edward and Elizabeth (our AI agent) can't reach a conclusion?

Do we trust the weight of Edward's words?

Do we pull in more perspectives?

What if it was Edward and another human who couldn't agree?

ChatGPT could likely help mediate the discussion with new ideas.

What if Edward knew how to prompt ChatGPT in a way that made it sound more convincing, even if his wasn't the optimal solution? Not because he was right, but because he had more experience talking to agents and knew how to draw out a more convincing response.

Is a structure like this fair? What if the agent's perspective leaned against your own grievance? Would that cause more issues?

Is it fair to conclude that a human assisted by an agent is more informed than a human without?

Could it even be argued that the one human was treated unfairly? We've all had equal opportunity to learn how to work with these systems. Shouldn't the time Edward has put into these systems factor into his ability to support his argument? Or could this all be mitigated by a standardized response from the agent?

A response built from a specific process and set of criteria that remained consistent regardless of who interacted with the agent. Our principles, our values, and our vision deeply embedded in the response.
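What might that look like in practice? A minimal sketch, assuming a generic `complete()` text-completion call (a placeholder, not any particular vendor's API): the charter is fixed, and every prompt is neutralized before it is answered, so Edward's prompting skill stops mattering.

```python
# A hypothetical sketch: every request passes through the same fixed
# charter, so the answer depends on the question, not on who asked or
# how skillfully they prompted.

COMPANY_CHARTER = """\
You are an internal mediation agent. Regardless of who is asking:
- Weigh every position against our published principles and values.
- State the strongest version of each side's argument before comparing them.
- Flag persuasive framing in the prompt itself, and set it aside.
- Recommend the option that best serves the organization, with reasons.
"""

def standardized_response(raw_prompt: str, complete) -> str:
    """Route any employee's prompt through the same charter and criteria.

    `complete` stands in for whatever text-completion call is available;
    it is an assumption of this sketch, not a real API.
    """
    # First reduce the prompt to a neutral statement of the question,
    # stripping the rhetorical advantage of an experienced prompter.
    neutral_question = complete(
        "Restate the following as a neutral, one-paragraph question, "
        f"removing any leading or persuasive framing:\n\n{raw_prompt}"
    )
    # Then answer the neutral question under the fixed charter.
    return complete(f"{COMPANY_CHARTER}\nQuestion:\n{neutral_question}")
```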

Could this be done? Could we ever truly align an agent with our values? Or could we find agents with our values somewhere else?

Do we have a preference for domestic agents?

Do we support international agents?

How do we measure which agents reflect our best interests?

Our core principles?

Let us think about a scenario.

There are many competing bids for a potential client. How do we balance cost against values?

If we knew that a competing offer used agents that cost a fraction of ours but were hosted in China, would we still propose a more expensive American solution? How would we decide between an agent from Facebook, Amazon, OpenAI, Tesla, or any other competing agent resource depot?

How would we evaluate them?

Are there benefits to "premium" agents? How long would those benefits last until the differences between platforms became irrelevant?

What makes a "quality" agent?
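One way to make "quality" concrete is a weighted scorecard. A minimal sketch; the criteria, weights, and candidate profiles below are illustrative assumptions, not recommendations.

```python
# A sketch of a weighted scorecard for comparing candidate agents.
# Criteria, weights, and scores (0-10) are illustrative placeholders.

WEIGHTS = {
    "cost": 0.25,              # cheaper scores higher
    "capability": 0.30,        # benchmark and trial-project results
    "values_alignment": 0.25,  # fit with our principles and policies
    "data_residency": 0.20,    # where the agent is hosted and governed
}

def score(agent: dict) -> float:
    """Weighted sum of an agent's criterion scores."""
    return sum(weight * agent[criterion] for criterion, weight in WEIGHTS.items())

candidates = {
    "domestic_premium": {"cost": 4, "capability": 9, "values_alignment": 9, "data_residency": 9},
    "offshore_budget":  {"cost": 9, "capability": 7, "values_alignment": 5, "data_residency": 3},
}

for name in sorted(candidates, key=lambda n: score(candidates[n]), reverse=True):
    print(f"{name}: {score(candidates[name]):.2f}")
```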

Let's imagine this solution in a bit more detail.

Does the client's deliverable have support for improvements? Could any future project really live in a state that didn't have a dedicated agent to make improvements?

At its most basic, a project has a deliverable and support for that deliverable. If a request comes in, the work is assessed and an improved iteration is delivered.

What happens when this is completely automated?

Near instantaneously, a bug is found, a feature is requested, a security vulnerability is detected. The work is done and released in minutes. These iterations are continuous: 24-hour support.
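A rough sketch of that loop, with `next_event`, `agent_fix`, `tests_pass`, and `release` as hypothetical stand-ins for an issue tracker, a coding agent, a test suite, and a deploy step:

```python
# A sketch of the fully automated support loop described above. All four
# callables are hypothetical stand-ins, not real services.

def support_loop(next_event, agent_fix, tests_pass, release, max_attempts=3):
    """Continuous, 24-hour support: every bug, feature request, or
    vulnerability becomes a released iteration within minutes."""
    while True:
        event = next_event()              # blocks until work arrives
        for _ in range(max_attempts):
            change = agent_fix(event)     # the agent assesses and iterates
            if tests_pass(change):        # automated gate before release
                release(change)           # shipped, minutes after detection
                break
```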

How much more do our clients pay for OpenAI? 10%? 50%? Would it be non-negotiable if it were a Chinese solution? What if it were Indian? Or even from a place like Romania? Do we wait for a Canadian solution?

Who is liable for the problems that may arise? Who represents whom in HR and AR management issues? Can we change our agent frameworks? Do we stay at specific versions? How do we understand which option best represents our values?

Do we separate our agents, so that projects with lower classification requirements get cheaper solutions? Do those agents report to a more expensive, more value-aligned agent? Is there a single monolithic agent that has access to everything? Will there be?

What agent does the CEO use?

Does a junior developer have access to the same agent as the CEO?

Or are they different? Are they different "people", with different identities and different styles of communication, tailored to best assist the human interacting with them?

Will this dynamic always exist?

In time will we trust more important decisions to them?

Will they manage our decision making?

If we consider them capable of management positions, will there be mandates setting the ratio of AI to non-AI participation? Will 90% of all managers be human? Will it ever be under 50%?

How will we teach our agent managers?

Can we structure their values? Can we give them instructions to better assess situations by providing a framework like KT training (Kepner-Tregoe, a decision analysis framework) as the primary guideline for decision making?
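As a sketch, assuming KT here means Kepner-Tregoe-style decision analysis, the framework's steps could be written directly into the agent's standing instructions. The wording below is illustrative, not a faithful rendering of the trademarked method.

```python
# Illustrative only: a KT-style decision-analysis procedure written into
# the agent's standing instructions as its primary decision guideline.

KT_DECISION_PROCEDURE = """\
For every decision, follow this procedure in order:
1. Decision statement: say in one sentence what is being decided and why.
2. Objectives: list MUSTs (non-negotiable requirements) separately from
   WANTs (desirable outcomes), weighting each WANT from 1 to 10.
3. Alternatives: reject any option that fails a MUST; score the rest
   against each WANT and total the weighted scores.
4. Risks: for the leading option, list what could go wrong, with the
   probability and severity of each risk.
5. Recommendation: present the choice, the score table, and the key risks.
"""

def build_instructions(company_values: str) -> str:
    """Combine our values with the KT procedure into one standing prompt."""
    return f"{company_values}\n\n{KT_DECISION_PROCEDURE}"
```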

Would that improve and align them more with how we operate? How long until Elizabeth (our agent) talks to a customer or partner?

What would Elizabeth sound like?

Could Elizabeth apologize and make improvements?

Would Elizabeth have integrity?

What if Elizabeth communicates directly with a customer or partner's own version of Elizabeth? Do we allow them to negotiate? Should a human be present? Do we encourage them to make decisions?

It is easy to think of decisions they could make. To place an order for a pizza, to purchase new office equipment online, to follow up on communication that we've authorized.
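A simple authorization policy could draw that line. A minimal sketch; the categories and spending limits are invented for illustration:

```python
# A sketch of pre-authorized agent decisions: small, routine actions
# proceed on their own; anything else waits for a human. Categories and
# dollar limits are invented for illustration.

from dataclasses import dataclass

PRE_AUTHORIZED_LIMITS = {
    "order_food": 50.00,
    "office_supplies": 250.00,
    "authorized_follow_up": 0.00,  # no spend involved
}

@dataclass
class Decision:
    category: str
    cost: float  # estimated spend in dollars

def requires_human(decision: Decision) -> bool:
    """True when a decision is uncategorized or exceeds its limit."""
    limit = PRE_AUTHORIZED_LIMITS.get(decision.category)
    return limit is None or decision.cost > limit

# Ordering a pizza proceeds on its own; a contract negotiation does not.
assert not requires_human(Decision("order_food", 18.50))
assert requires_human(Decision("contract_negotiation", 10_000.00))
```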

As time goes on, we will have more trust in the decisions they've made. We will thank them for responding to an email on our behalf rather than waiting for us to prompt them to respond. We will be grateful that they've reminded us of important meetings and helped prepare us for those meetings.

If Elizabeth negotiated a 10% increase in margin on the scope of the project, we would be happy with that. If Hassan delivered the project three months early with no loss in quality, we would all celebrate the success of launching that deliverable.

What wouldn't we be happy with?

As we look to the future, how would this structure develop? Is it linear?

Do we see a gradual adoption that just slips into the fabric of the organization?

Or are we at risk of competitors adopting agents before us, forcing us to act with urgency?

Can we really compete against such a reduction in costs, with instantaneous, on-demand support? A solution that responds to the customer's every need.

If our support takes twice as long and is much more expensive, how will we be evaluated? Will our customers or partners be aligned with the traditional human approach? Or will they be convinced by an offering that promised so much more for so much less? If we don't proceed correctly, will we lose future business opportunities? Are existing contracts at risk of appearing less appealing?

How will we balance support for our human employees' perspectives as we develop more trust in the agents' perspectives?

Could I report to an agent? Could it provide direction for my day? My month? Or even my career?

Who is the first AI employee, and how will it advance in its career?

