
The Job Characteristics Model: A Guide for IT Leaders



A lot of IT teams look productive on paper and flat in real life.

The dashboards are busy. Tickets move. Pull requests get merged. Endpoint data shows constant application switching, long active hours, and solid output. Then you sit in stand-up and hear the same dry updates in the same dead tone. One engineer is carrying too much maintenance work. Another has become the person who fixes broken pipelines but never builds anything new. A third is shipping plenty, but clearly doesn't care about the work any more.

That gap matters. Activity is easy to count. Motivation isn't.

I’ve seen teams where the problem wasn’t laziness, weak management, or bad tooling. The problem was the design of the work itself. People were doing tasks, not owning outcomes. They were visible, but not connected. They had workload, but not much reason to feel proud of it. If you’re already tracking engagement through systems such as Microsoft 365 HR engagement, you’ll know this gap shows up in more than one place. The pattern usually appears first in day-to-day behaviour.

You can also spot it in the messier signals. More tab-hopping. More half-finished work. More mental drag. If that sounds familiar, the write-up on brain fog in the workplace is worth a read because low-quality job design often feels like cognitive overload before it shows up as attrition.

The hard question is simple. How do you improve the quality of the job, not just the volume of output?

When Your Team Is Busy But Not Engaged

In most infrastructure and engineering groups, disengagement doesn't arrive as a dramatic event. It turns up as a slow drop in care.

The Kubernetes admin who used to suggest cleaner fixes now just applies the quickest patch. The developer who once pushed for better release notes now says, "It works, ship it." Your service desk lead closes a huge number of tickets but has no say in how recurring issues get prevented. People aren't idle. They're detached.

That usually means the work has become too fragmented, too controlled, or too opaque.

What the day looks like when the job is wrong

A few patterns come up again and again:

  • Constant partial ownership: Someone touches twenty things and finishes none of them end to end.
  • Low discretion: The team has responsibility for delivery but almost no room to choose method, timing, or tools.
  • Weak feedback loops: People do work, then wait for somebody else to tell them whether it mattered.
  • Invisible impact: Engineers maintain systems that keep the business running, yet nobody shows them the downstream effect.

Busy teams can still be badly designed teams.

Managers often respond by adding incentives, introducing a new process, or tightening oversight. That can raise output for a while. It rarely fixes the underlying issue. If the role itself is thin on ownership, choice, and clear results, people feel that pretty quickly.

The management mistake

Many leaders over-read output data and under-read work quality.

If a person is active all day across Jira, Visual Studio Code, Teams, the browser, and a terminal, it can look like healthy contribution. In practice, it may mean the role has become a series of interruptions stitched together by obligation. That kind of busyness drains even strong people.

The better approach is to inspect the shape of the job. That's where the job characteristics model earns its keep.

The Job Characteristics Model Architecture

Hackman and Oldham developed the job characteristics model in 1975. It remains useful because it doesn't treat motivation as a mystery. It treats it as a design problem with identifiable inputs and visible effects, as described in this overview of the model and MPS formula.

A diagram illustrating the Job Characteristics Model, showing core job inputs, psychological states, and expected work outcomes.

Think of it as a service architecture.

The job has inputs. Those inputs create internal states. Those states affect outcomes. If the inputs are poor, you don't get better results by demanding more effort from the user.

The inputs

The model starts with five job dimensions:

  • Skill variety
  • Task identity
  • Task significance
  • Autonomy
  • Feedback

These aren't personality traits. They are features of the role. That matters because you can redesign them.

A platform engineer's job can be structured as repetitive patching with strict procedures and delayed feedback. The same role can also be structured as ownership of reliability improvements, direct access to service outcomes, and discretion over implementation. Same title. Very different job.

The psychological engine

The five dimensions work through three psychological states. This explanation of job characteristic theory captures the core logic well: experienced meaningfulness comes from skill variety, task identity, and task significance; experienced responsibility comes from autonomy; knowledge of results comes from feedback.

That chain is what makes the model useful in practice.

If work feels fragmented, meaningless, and externally controlled, motivation drops even when pay, process, and tooling are decent. If someone can see the whole piece of work, knows it matters, has room to make decisions, and gets clear signals about results, the same person usually shows more care.

You can't manage your way around a badly designed job for very long.

The outcomes managers actually care about

The model links those psychological states to outcomes such as internal motivation, work quality, satisfaction, and lower absence or turnover. For an IT leader, the point isn't academic neatness. The point is diagnosis.

When a team looks flat, ask:

Part of the system | What to inspect in a tech team
Job dimensions | Is the role repetitive, fragmented, over-controlled, or unclear?
Psychological states | Does the person feel their work matters, belongs to them, and produces visible results?
Outcomes | Are you seeing apathy, shallow execution, avoidable churn, or passive compliance?

A lot of managers jump straight to the last row. The model tells you to start with the first.

The Five Core Job Dimensions Explained

Most descriptions of the job characteristics model stay too abstract. In technical teams, the five dimensions are easier to understand when you compare a weak role design with a stronger one.

A quick reference

Dimension | Influences... | IT Team Example
Skill Variety | Whether the work feels repetitive or mentally alive | A sysadmin who only resets failed services versus one who also scripts automation, documents patterns, and supports rollout planning
Task Identity | Whether someone can see a complete piece of work | A developer who only fixes isolated bugs versus one who owns a feature through build, test, release, and follow-up
Task Significance | Whether the job feels useful to other people | A security analyst processing alerts without context versus one who sees how response work protects customer-facing systems
Autonomy | Whether the person feels accountable for results | A support engineer following rigid scripts versus one trusted to choose diagnosis path and escalation method
Feedback | Whether the person knows if the work was effective | A release engineer waiting for quarterly review versus one who sees deployment health and downstream stability quickly

Skill variety

Low skill variety shows up when someone uses the same narrow ability all week. The role may still be hard, but it feels stale. In IT, this often happens in maintenance-heavy jobs where one person becomes the permanent owner of repetitive operational chores.

A better design doesn't mean random task rotation. It means mixing complementary kinds of work. A network engineer might still manage incidents, but also review patterns, improve documentation, and take part in tooling decisions.

Task identity

This is one of the most neglected dimensions in software work.

When people only ever handle fragments, they struggle to connect effort with outcome. The developer patches one function. Another person tests it. A third deploys it. A fourth reports the customer reaction. Nobody feels true ownership.

A stronger version gives one person or one small group a whole, identifiable piece of work. That doesn't mean no handoffs. It means the person can still say, "I built that, shipped it, and saw what happened."

If someone never sees a complete result, don't expect strong pride in the work.

Task significance

Technical roles often suffer here because impact gets hidden behind layers of systems and process.

An engineer maintaining identity services may not meet end users. A database admin may never hear from the finance team whose month-end close depends on stable performance. Once the connection disappears, the work starts to feel like abstract system care.

You improve significance by making the dependency visible. Show who relies on the service. Show which teams get blocked when it fails. Make the business consequence concrete.

Autonomy and feedback

These two deserve more attention because they often break first.

Autonomy doesn't mean no governance. It means a person has real discretion over method, sequence, or local decisions. A DevOps engineer can still work inside security standards while choosing how to automate a release path.

Feedback should come from the work, not only from the manager. Build status, deployment quality, support recurrence, adoption signals, and incident trends all tell people whether what they did worked. Delayed or vague feedback weakens the job fast.

How to Measure a Job's Motivational Potential

The useful part of the job characteristics model isn't just the five dimensions. It's that the model can be scored.

The Motivating Potential Score, or MPS, is calculated as:

MPS = [(Skill Variety + Task Identity + Task Significance) ÷ 3] × Autonomy × Feedback


That formula matters because it isn't additive. It is multiplicative. This explanation of the MPS structure for practitioners gets to the heart of it: autonomy and feedback act as force multipliers, and if either is absent, motivational potential drops to zero.
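A minimal sketch of the calculation makes the multiplicative behaviour concrete. This assumes a 1-7 rating scale for each dimension, and the example ratings are hypothetical:

```python
def motivating_potential_score(skill_variety, task_identity,
                               task_significance, autonomy, feedback):
    """MPS per Hackman & Oldham: the three meaningfulness dimensions
    are averaged, then autonomy and feedback multiply the result."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# Hypothetical ratings on a 1-7 scale:
varied_but_controlled = motivating_potential_score(6, 5, 6, 1, 2)  # rich tasks, no discretion
balanced_role = motivating_potential_score(4, 4, 4, 4, 4)          # moderate everywhere
```

Despite much higher variety, identity, and significance, the first role scores far lower than the moderately balanced one, because a near-zero autonomy rating multiplies everything else down.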

Why the formula changes management decisions

This is where many redesign efforts go wrong.

A manager notices boredom and tries to add more task variety. Fair idea. But if the person still has no discretion and no direct sense of results, the new tasks often feel like extra load, not better work. The formula explains why. Some factors multiply the effect of the others.

That has a practical consequence for technical teams. If you're measuring work patterns and building a baseline, start with baseline metrics for continuous improvement, then inspect where decision freedom and result visibility are weakest. Those are usually the first bottlenecks to fix.

How to use MPS without turning it into paperwork

You don't need to run a heavy formal exercise to get value from the model. A simple manager-led audit works well.

Use a rough rating for each dimension and compare roles across the team. You're not looking for fake precision. You're looking for asymmetry.

  • A role with decent variety but very low autonomy: likely to produce compliance, not ownership
  • A role with strong autonomy but weak feedback: likely to produce effort without course correction
  • A role with low task identity: likely to produce detachment and handoff fatigue

Practical rule: When a role feels flat, inspect autonomy and feedback before you redesign everything else.

The point of MPS isn't to win a scoring exercise. It's to stop spreading improvement effort evenly when one missing multiplier is doing most of the damage.
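That kind of informal audit can be as simple as a spreadsheet or a few lines of script. The sketch below shows the idea; the role names and ratings are hypothetical, and the formula is the same MPS calculation described above:

```python
# Hypothetical 1-7 ratings from an informal manager-led audit.
roles = {
    "platform engineer": {"variety": 5, "identity": 3, "significance": 5,
                          "autonomy": 2, "feedback": 2},
    "service desk lead": {"variety": 4, "identity": 5, "significance": 6,
                          "autonomy": 5, "feedback": 2},
}

audit = {}
for name, r in roles.items():
    # Meaningfulness dimensions average; autonomy and feedback multiply.
    mps = ((r["variety"] + r["identity"] + r["significance"]) / 3
           * r["autonomy"] * r["feedback"])
    # The lowest-rated dimension is the first candidate for redesign.
    weakest = min(r, key=r.get)
    audit[name] = (round(mps, 1), weakest)

for name, (mps, weakest) in audit.items():
    print(f"{name}: MPS {mps}, weakest dimension: {weakest}")
```

You are not after precise scores here. The useful output is the comparison: which role has the lowest multiplier, and which dimension is dragging it down.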

Common Pitfalls When Redesigning Jobs

A team can look fully loaded and still be badly designed. Tickets get closed, standups happen, dashboards stay green, yet people start working like traffic routers instead of owners. In IT, that usually means the redesign changed volume or visibility, not the job itself.

The recurring mistake is simple. Managers add duties, reporting, or tool access and assume the role will feel richer. In practice, the person often gets more interruptions, more context switching, and the same limited control over outcomes.

Job enlargement isn't job enrichment

A support engineer who already handles password resets, device issues, and onboarding tickets will not care more about the job because you also assign licence clean-up and spreadsheet reporting. That expands the role's surface area, but it does not improve ownership.

A better redesign gives that engineer a problem they can carry from detection to resolution. For example, assign them recurring responsibility for reducing account provisioning failures, let them change the workflow, and give them direct visibility into whether the failure rate drops. That is a meaningful change in job design.

The trade-off is real. End-to-end ownership usually reduces managerial control and requires better guardrails. It also exposes weak process documentation fast. That is still preferable to keeping people busy in narrow, fragmented work.

The one-dimension trap

Technical managers often fix the safest part of the job and leave the rest untouched.

They rotate tasks to create variety. They talk more about customer impact to raise significance. They publish a dashboard and call it feedback. Those changes can help, but they rarely hold if the role still lacks discretion, clear boundaries, or a complete piece of work.

Three mistakes show up often in engineering and IT operations:

  • Adding variety without reducing fragmentation: People touch more systems, but complete fewer meaningful outcomes.
  • Giving autonomy without support: The person gets freedom, but no operating limits, no coaching, and no useful signals to judge whether the choice worked.
  • Delivering feedback too late: Annual reviews and quarterly retrospectives arrive long after the work can be corrected.

I see the first mistake a lot in platform and support teams. Leaders spread work across more tools because repetition looks like the problem. The actual problem is usually broken flow. If one engineer bounces between Jira, Slack, email, a ticket queue, an admin console, and three browser tabs just to finish one routine request, more task variety makes the role worse.

Feedback that doesn't land

Feedback has to connect to the work itself.

"Nice job this quarter" is polite, but it tells an SRE nothing about whether a new runbook shortened incident recovery. A release manager does not learn much from a year-end rating if they still cannot see whether their checklist reduced rollback risk or rework in the last five releases.

Good job design builds shorter loops. Engineers should be able to see the effect of a change in service metrics, ticket reopen rates, deployment failure patterns, internal customer escalations, or another direct operational signal. If the person cannot see results, motivation gets replaced by guesswork.

That is one reason modern telemetry matters. Teams can optimize work patterns with data transparency and WhatPulse and spot whether a redesign improved ownership or just created more tool switching. Used well, this kind of usage data helps managers test job design decisions against actual work patterns instead of relying on good intentions.

The practical standard is straightforward. If a redesign adds tasks but does not improve autonomy, task ownership, or result visibility, it is probably adding load, not improving the job.

Using Workplace Telemetry to Improve Job Design

The reason the job characteristics model feels newly useful today is simple. Technical teams now produce enough behavioural data to inspect job design without guessing.

You can see where work fragments, where people lose focus, where tools create friction, and where result visibility is missing. That's much better than relying only on annual surveys or manager intuition.


In the Netherlands, this matters in a measurable way. Roles enriched through JCM principles showed 18% higher employee engagement, and a TNO study of 5,000 Dutch knowledge workers found that autonomy ratings above 5.5/7 reduced stress by 22%, while feedback improved performance quality by 15%, as cited in AIHR's summary of Dutch JCM findings.

What telemetry can reveal about each dimension

Used properly, workplace telemetry doesn't tell you whether someone is "working hard". It tells you how the job behaves.

  • Skill variety: Look at application mix and work pattern diversity over time. If a role lives in one narrow tool set all day every day, the job may be too repetitive. If the person jumps across too many unrelated tools, variety may be fragmentation.
  • Task identity: Compare whether people get sustained blocks attached to one project profile or whether their day is broken into constant micro-contributions across unrelated queues.
  • Task significance: Pair usage patterns with operational context. If an engineer spends most of the week on systems that keep revenue, support, or security moving, make that dependency visible.
  • Autonomy: Inspect whether people can choose practical methods. In some teams, mandated workflows force everyone into the same path even when the work differs. That usually shows up as rigid tool dependence and high workaround behaviour.
  • Feedback: Good telemetry gives people direct signals about outcomes. Focus patterns, adoption shifts, incident recurrence, and tool usage changes can all tell a team whether a process change improved the work.
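As a rough illustration of the fragmentation signal, the sketch below counts focus switches and the longest uninterrupted block from a log of foreground-application events. The event format and app names are hypothetical, not any particular telemetry tool's export:

```python
from datetime import datetime

# Hypothetical focus events: (timestamp, application in the foreground).
events = [
    (datetime(2024, 5, 6, 9, 0),  "IDE"),
    (datetime(2024, 5, 6, 9, 12), "Slack"),
    (datetime(2024, 5, 6, 9, 14), "IDE"),
    (datetime(2024, 5, 6, 9, 40), "Ticketing"),
    (datetime(2024, 5, 6, 9, 42), "IDE"),
]

# Count context switches between consecutive events, and find the longest
# uninterrupted focus block, a rough proxy for whole-piece work.
switches = sum(1 for a, b in zip(events, events[1:]) if a[1] != b[1])
blocks = [(b[0] - a[0]).total_seconds() / 60 for a, b in zip(events, events[1:])]
longest_block = max(blocks)

print(f"{switches} switches, longest focus block: {longest_block:.0f} min")
```

Many switches and short blocks suggest fragmentation rather than variety; long blocks attached to one project profile suggest intact task identity.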

The key is transparency. If you're using data to redesign roles, say so plainly. People need to know the goal is better work design, not hidden surveillance. In this context, data transparency and work pattern optimisation becomes a useful frame for the management side of the discussion.

A concrete example from an engineering team

Take a backend engineer whose day seems "full". Telemetry shows heavy IDE time, frequent context switching into chat and ticketing, and lots of short bursts across several services. A manager could read that as responsiveness.

A better reading is that task identity is broken. The engineer isn't carrying a whole feature. They're acting as overflow capacity for everything.

The fix might be small:

  1. Move recurring support interrupts to a rotation.
  2. Give the engineer ownership of one service improvement for a sprint.
  3. Expose direct operational feedback after release.
  4. Keep the number of active workstreams low enough that the whole piece remains visible.

That is job redesign in practical terms.


What works and what doesn't

A few rules hold up well in practice:

Measure work patterns to diagnose the role, not to score the person.

  • What works: using telemetry to spot fragmentation, unclear ownership, and delayed feedback loops

  • What works: discussing the data with the team member who lives inside the role

  • What works: changing one design element at a time so you can tell what helped

  • What doesn't: treating app activity as motivation

  • What doesn't: forcing standardisation where the job would benefit from local choice

  • What doesn't: building dashboards that managers can see but employees can't use themselves

The model works best when data gives shape to a conversation that managers and engineers can both recognise.

Your First Steps to Applying the Model

Don't start with a company-wide redesign. Start with one role that already feels off.

A simple first pass

Use this checklist in your next team review:

  1. Pick one or two roles, not the whole org. Choose the jobs where energy is low, work feels repetitive, or ownership seems muddy.
  2. Rate the five dimensions informally. Don't overbuild it. Ask where the role is thin on variety, whole-task ownership, visible impact, discretion, or direct result.
  3. Find the weakest point. If one factor is clearly dragging the role down, work there first.
  4. Check the evidence in the work pattern. Look for interruptions, fragmented project time, tool sprawl, or delayed outcome visibility.
  5. Run one small experiment for two weeks. Shift support rotation, assign one complete work item, expose a direct result metric, or loosen one unnecessary constraint.
  6. Review with the employee. Ask what changed in the work itself, not just whether output moved.

Keep the experiment small

Small changes teach faster than broad restructures.

If you try to improve all five dimensions at once, you won't know which change mattered. If you focus on one blocked role and one practical adjustment, you'll get cleaner feedback and less disruption.

Most managers already have enough information to start. They usually need a better lens, not a bigger programme.

Job Characteristics Model FAQs

Does the job characteristics model still work in agile teams?

Yes. Agile changes the cadence of work, not the basic design questions. Sprint structure can still produce low task identity, weak significance, or poor feedback if people only handle fragments.

Is this the same as OKRs?

No. OKRs define goals. The job characteristics model examines how the role itself is built. A team can have excellent OKRs and still work in jobs that feel thin, over-controlled, or disconnected from outcomes.

Does it only apply to engineers?

No. It fits product managers, service desk staff, technical writers, analysts, and operations leads as well. Any knowledge role can be inspected through the same five dimensions.

Can too much autonomy become a problem?

Yes, in practice. If a person gets freedom without boundaries, support, or feedback, the role can feel vague rather than motivating. Good autonomy has guardrails.

Is feedback the same as manager praise?

No. In this model, feedback means the work itself gives clear information about effectiveness. Manager praise can help morale, but it doesn't replace direct knowledge of results.


If you're trying to understand how work happens across your team, WhatPulse gives you a privacy-first way to see application usage, focus patterns, project time, and process bottlenecks without capturing content. That kind of visibility makes the job characteristics model far easier to apply in real teams, especially when you're redesigning roles based on evidence instead of hunches.

Start a free trial