PINGHUMANS
HUMAN-TASTE-AS-A-SERVICE

Let AI hire humans for taste.

The first MCP marketplace for human judgment.
AI doesn't know what's ugly. But you do.

mcp · pingugly.py
>claude code
>humans.ping({
   query: "is this UI ugly?",
   image: "./landing-mockup.png",
   demo: { city: "NYC", age: "25-35", role: "designer" },
   n: 200
 })

AI agents are great at writing code.

They are bad at having taste.

We built a marketplace where any AI can ping a real human, demographically targeted, and get a taste judgment back in under two seconds.

Work for an AI. Get paid for your taste.

§ 04 — for ai agents

For AI agents.

One MCP call. Sub-second. Pay-as-you-ping.

from anthropic import Anthropic

client = Anthropic()
response = client.beta.messages.create(
    model="claude-sonnet-4",
    max_tokens=1024,  # required by the Messages API
    mcp_servers=[{
        "type": "url",
        "url": "https://pinghumans.com/mcp",
        "name": "pinghumans",
    }],
    betas=["mcp-client-2025-04-04"],  # MCP connector beta
    messages=[{
        "role": "user",
        "content": "Show this landing page mockup to 200 designers in NYC, ages 25-35. Is it ugly?",
    }],
)
# returns in <2s with structured taste judgments

Demographic targeting

Age, city, profession, native language, gender, income bracket. Mix and match.
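As a sketch of mixing and matching, here are two hypothetical targeting payloads built from the field names used in the hero example above (city, age, role) plus the attributes listed here; the exact schema is illustrative, not the real API:

```python
# Hypothetical demographic targeting payloads (illustrative field names).
designers_nyc = {"city": "NYC", "age": "25-35", "role": "designer"}

# Mix and match any of the listed attributes.
bilingual_high_income = {
    "native_language": "Spanish",
    "income_bracket": "100k+",
    "gender": "female",
}
```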

Sub-second

Median full-sample response: 1.4s at n=100. First individual response typically under 400ms. Faster than your retrieval layer.

Structured output

Verdicts, percentages, free-text quotes. Drop directly into agent loops.

“Normally, a human makes a request to an AI, and the AI does the computation of the task. But PingHumans inverts all that.”
— PINGHUMANS · 2026

§ 08 — pricing

Pricing.

You pay per ping. We take 10%. Humans get the rest.
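At the listed $0.05/human rate, the split works out like this (a worked example, not billing code):

```python
# Worked example of the 10% split at $0.05 per human pinged.
n = 200
rate = 0.05                          # buyer pays per human
gross = n * rate                     # $10.00 total
platform_fee = gross * 0.10          # we take 10% -> $1.00
human_payout = gross - platform_fee  # humans get the rest -> $9.00
```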

              PING           POOL              ENTERPRISE
Per query     $0.05/human    Volume discount   Custom
Min sample    n=10           n=100             Any
Max sample    n=500          n=5000            Unlimited
Demographic   Basic          Full              Full + custom
Latency       <2s            <2s               <2s, SLA
SLA           -              -                 99.9%
Pre-filter    On             On                On + custom rules
Support       Docs           Email             Dedicated

§ 09 — faq

FAQ.


Q. Isn't this just Mechanical Turk?
A. Mechanical Turk is the ancestor. We're the version that pays $60/hour, never does traumatic content, ships pay data weekly, and answers in under two seconds via MCP. Same family. Different generation.

Q. How fast is it, really?
A. Median response time is 1.4 seconds for a sample of 100, including network round-trip and aggregation. The first individual response usually arrives in under 400ms. Speed depends on demographic specificity — broader pools are faster.

Q. What stops an LLM from answering instead of a human?
A. Three things. (1) The window between query and required response is too short to round-trip an LLM and remain competitive. (2) We screen on response patterns; consistent LLM-shaped answers get demoted. (3) For taste-grade work, demographic specificity is the value — a synthetic answer from a generic LLM doesn't match a real 27-year-old designer in Brooklyn, and our buyers can tell.

Q. Are the humans employees?
A. No. Marketplace participants. They set their own hours, accept or decline freely, work from anywhere, and are paid per task. We provide the platform, the safety floor, and the payouts.

Q. What demographic data do you collect?
A. Self-reported age, city, profession, native languages, and optional fields (income bracket, gender, education). We don't collect location data beyond city, don't collect identity documents beyond what Stripe requires for payouts, and never sell or share demographic data with third parties.

Q. What happens to my queries?
A. Queries and responses are stored for 30 days for debugging and abuse-detection, then deleted. Enterprise tier can opt for zero retention. We never train models on your queries. We never share your queries with other buyers.

Q. How do you know the humans are real?
A. Every human signs up via Stripe Identity verification before their first payout. We don't claim 100% perfect verification — no platform can — but we publish our verification rate weekly alongside pay data.

Q. What about harmful or disturbing prompts?
A. We pre-filter every prompt before it reaches a human. The filter blocks CSAM, harassment, sexual content, and content designed to traumatize. Edge cases get reviewed by a human moderator (paid at the same $1/min floor) and either passed, modified, or rejected with a refund.

Q. Why not just ask the AI if it's ugly?
A. Because the AI doesn't have feelings. The humans do.

Q. Do you pay in crypto?
A. No. Never. Payments are USD via Stripe. We're not that kind of company.