Technical Specification v1.1
Looper Dot Framework
A human verification system that exploits the fundamental behavioural gap between humans and automated programs. Humans are involuntarily drawn to a salient visual stimulus. Bots parse the DOM. This asymmetry is the foundation of Looper Dot. The framework operates across five detection layers with adaptive scoring that adjusts for desktop and mobile interaction modalities.
1. Design Principle
The core insight is that human visual attention is involuntary and pre-attentive. When a single, high-contrast colour dot is presented on an otherwise empty field, the human visual system processes it in under 200 milliseconds through what neuroscience calls the "pop-out effect." This is a bottom-up, stimulus-driven response that cannot be suppressed.
Automated programs, by contrast, interact with web pages through the Document Object Model (DOM) — a structured tree of HTML elements. They do not render pixels, do not perceive colour saliency, and do not experience involuntary visual attention. Even sophisticated bots that simulate mouse movements must programmatically locate the dot's coordinates from the DOM before generating synthetic interaction events.
This fundamental asymmetry — between genuine visual perception and programmatic DOM traversal — creates a detection surface that is robust, fast, and frictionless.
2. Pupil-vs-Dot Entropy
The Looper Dot interface introduces a deliberate geometric asymmetry between two constructs: the Pupil and the Dot.
The Pupil is a mathematical point — a single set of coordinates (x, y) on the background grid, representing the true centre of the verification target. It has no physical area. It is the abstract anchor from which the system measures all interaction offsets.
The Dot is the visual stimulus presented to the user — a luminous sphere that occupies significant screen real estate (56px diameter by default, scaling with viewport). The Dot is deliberately much larger than the Pupil. Its visual footprint covers hundreds of touchable pixels.
This size differential creates what we term "touch entropy" — the probabilistic distribution of where a human will make contact within the Dot's area. Because the Dot is large and the human finger or cursor is imprecise, every genuine human touch lands at a slightly different offset from the Pupil. The touch coordinates form a natural distribution around the Pupil's centre, with variance determined by the user's motor control, device type, finger size, screen size, and interaction angle.
A bot, by contrast, must extract the Dot's coordinates from the DOM (typically via getBoundingClientRect() or similar API calls). This yields the Pupil — the mathematical centre. When a bot dispatches a synthetic click event, it targets this exact coordinate. The result is a click offset of zero or near-zero pixels from the Pupil. Even if the bot adds random noise to simulate imprecision, the noise distribution is typically uniform or Gaussian with artificially chosen parameters — distinguishable from the natural distribution produced by genuine human motor control.
The entropy of human touch is therefore a function of the Dot-to-Pupil ratio. The larger the Dot relative to the Pupil, the greater the entropy — and the more information each touch event carries about the authenticity of the interaction. This is why the Dot is deliberately oversized: it maximises the detection surface while remaining a single, effortless interaction for the user.
Critically, this entropy is device-agnostic. On desktop, mouse movements produce a trajectory with spatial variance. On mobile, finger taps produce a contact point with positional variance. On tablets, stylus touches produce yet another distribution. In every case, the Pupil-vs-Dot asymmetry ensures that genuine human interaction generates measurable entropy — while bot interaction collapses to a deterministic point.
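The Pupil-vs-Dot entropy measure described above can be sketched as follows. This is a minimal illustration, not the shipped implementation; the function names (`radialOffset`, `offsetStats`, `looksDeterministic`) and the 2px/1px² floors are assumptions chosen for the example.

```javascript
// Radial offset of one interaction point from the Pupil (the true centre).
function radialOffset(pupil, point) {
  return Math.hypot(point.x - pupil.x, point.y - pupil.y);
}

// Mean and variance of radial offsets across a batch of interactions.
function offsetStats(pupil, points) {
  const r = points.map((p) => radialOffset(pupil, p));
  const mean = r.reduce((a, b) => a + b, 0) / r.length;
  const variance = r.reduce((a, b) => a + (b - mean) ** 2, 0) / r.length;
  return { mean, variance };
}

// A bot that extracts the Pupil from the DOM clicks at (or near) zero offset
// with negligible spread; a human touch lands somewhere inside the 56px Dot
// with natural variance. The floors below are illustrative thresholds.
function looksDeterministic(stats, meanFloorPx = 2, varianceFloorPx2 = 1) {
  return stats.mean < meanFloorPx && stats.variance < varianceFloorPx2;
}
```

In practice the offset distribution would be accumulated across the whole session rather than judged from a handful of points, but the collapse-to-a-point signature is visible even in small samples.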
3. Detection Architecture — Five Layers
The framework operates across five concurrent detection layers, each analysing a different category of behavioural signal. No single layer is sufficient for a definitive verdict; instead, all signals are combined into a weighted "humanity score" that provides probabilistic confidence. The scoring weights adapt dynamically based on the input modality (mouse vs touch).
Layer I — Involuntary Attention Response: Measures the time elapsed between the dot's visual render and the user's first mouse movement or touch event. Genuine human reaction times fall within a characteristic distribution (200-800ms for desktop, 80-2000ms for mobile) with natural variance. Bot reaction times are either near-instantaneous (<50ms, indicating direct DOM coordinate extraction) or artificially delayed with suspiciously low variance.
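A minimal sketch of the Layer I gate, using the windows quoted above (200-800ms desktop, 80-2000ms mobile, under 50ms bot-like). The function name and the three-way verdict labels are illustrative assumptions.

```javascript
// Classify the delay between dot render and first input event.
function classifyReactionTime(ms, modality) {
  if (ms < 50) return 'bot-like'; // near-instant: direct DOM coordinate extraction
  const [lo, hi] = modality === 'touch' ? [80, 2000] : [200, 800];
  if (ms >= lo && ms <= hi) return 'human-range';
  return 'suspicious'; // outside the characteristic window: scored low, not rejected
}
```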
Layer II — Motor Signature Analysis (Desktop-Primary): Captures the full trajectory of the cursor as it moves toward the dot. Human motor control produces curved paths with variable velocity, acceleration, and micro-corrections. The trajectory is analysed for linearity (straight paths are bot-like), velocity variance (constant speed is bot-like), and curvature complexity. On mobile devices, this layer is de-weighted because users tap directly rather than tracing a path — any touch-move micro-movements detected during the tap are treated as a bonus signal.
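The two trajectory signals named above, linearity and velocity variance, can be computed from a sampled path of `{x, y, t}` points as sketched below. The structure and naming are assumptions for illustration; a linearity of exactly 1.0 means a perfectly straight path, and zero speed variance means constant velocity, both bot-like.

```javascript
// Compute path linearity and speed variance over a sampled cursor trajectory.
function trajectorySignals(samples) {
  let pathLen = 0;
  const speeds = [];
  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    const dt = samples[i].t - samples[i - 1].t;
    const d = Math.hypot(dx, dy);
    pathLen += d;
    if (dt > 0) speeds.push(d / dt);
  }
  const first = samples[0];
  const last = samples[samples.length - 1];
  const direct = Math.hypot(last.x - first.x, last.y - first.y);
  const linearity = pathLen > 0 ? direct / pathLen : 1;
  if (speeds.length === 0) return { linearity, speedVariance: 0 };
  const mean = speeds.reduce((a, b) => a + b, 0) / speeds.length;
  const speedVariance =
    speeds.reduce((a, b) => a + (b - mean) ** 2, 0) / speeds.length;
  return { linearity, speedVariance };
}
```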
Layer III — Interaction Physics: Analyses the physical act of clicking or tapping the dot. Key signals include click/tap duration (humans hold for 50-300ms; bots execute instantaneously), pointer offset from the dot's centre (humans are imprecise; bots target the exact centre), and release dynamics. On mobile, the acceptable precision range is wider because finger taps are inherently less precise than mouse clicks.
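A sketch of Layer III scoring: press duration and pointer offset from the Dot centre, with a wider precision band for touch. The point values and the offset bands (1-15px mouse, 2-28px touch) are illustrative assumptions, not the production constants; only the 50-300ms hold window comes from the text above.

```javascript
// Score one click/tap on hold duration and offset from the Dot centre.
function physicsScore({ durationMs, offsetPx, modality }) {
  let score = 0;
  // Humans hold the press for roughly 50-300ms; instant press+release is bot-like.
  if (durationMs >= 50 && durationMs <= 300) score += 10;
  // Humans miss the exact centre; bots target it. Touch gets a wider band.
  const [minOff, maxOff] = modality === 'touch' ? [2, 28] : [1, 15];
  if (offsetPx >= minOff && offsetPx <= maxOff) score += 10;
  return score; // 0-20 of the layer's point budget
}
```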
Layer IV — Environmental Context: Passively fingerprints the browser environment to detect headless browsers (Puppeteer, Playwright), automation frameworks (Selenium WebDriver), and virtual machines. Checks include: navigator.webdriver flag, plugin count, language settings, screen dimensions, touch/UA consistency, DeviceMotionEvent availability, timing resolution, and Notification API presence. This layer operates without any user interaction and provides a baseline environmental trust score.
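A few of the Layer IV checks are sketched below, written against an injected `env` object so the logic can run (and be tested) outside a browser; the real layer would read the global `navigator` and `window`. The per-check point values are assumptions.

```javascript
// Baseline environmental trust from passive browser-fingerprint checks.
function environmentScore(env) {
  let score = 0;
  if (env.webdriver !== true) score += 2;  // navigator.webdriver flag unset
  if (env.pluginCount > 0) score += 1;     // headless builds often report none
  if (env.languages && env.languages.length > 0) score += 1;
  if (env.screenWidth > 0 && env.screenHeight > 0) score += 1;
  // Touch support must agree with the user-agent's claimed form factor.
  if (env.claimsMobile === env.hasTouch) score += 2;
  return score; // 0-7 in this sketch
}
```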
Layer V — Cookie Archaeology: Analyses the visitor's local storage and cookie history to distinguish organic human browsing patterns from bot-generated sessions. This layer examines: (a) whether a prior fingerprint exists and its age; (b) the diversity and variance of the visit log — including path diversity, scroll depth variance, session duration patterns, and inter-visit timing intervals; (c) whether localStorage is accessible at all (many headless browsers block it); and (d) the presence of other organic browser signals such as existing cookies and browsing history depth.
4. Cookie Archaeology — Layer V Deep Dive
The Cookie Archaeology layer is based on a fundamental observation: humans accumulate organic, messy, diverse browsing histories over time. Bots operate in clean, ephemeral sessions with no prior state.
Fingerprint Persistence: On first visit, the system writes a timestamped fingerprint to localStorage. On subsequent visits, the fingerprint's age is measured. A fingerprint older than 30 minutes earns a small trust bonus. One older than 24 hours earns more. One older than a week earns the maximum age bonus. Bots operating in fresh browser instances or incognito mode will never have an aged fingerprint.
Visit Log Analysis: Each page visit is recorded with four data points — timestamp, URL path, scroll depth (as percentage of page height), and session duration. The system analyses this log for diversity signals that characterise human behaviour:
Path Diversity — Humans visit multiple pages. A visit log containing only one unique path is less trustworthy than one with varied navigation patterns.
Scroll Depth Variance — Humans scroll to different depths on different pages depending on content interest. A visit log where every entry shows identical scroll depth (e.g., 0% or 100%) suggests automated behaviour.
Timing Interval Variance — The time gaps between human visits are irregular — sometimes minutes apart, sometimes hours or days. Bots that revisit at perfectly regular intervals produce suspiciously low timing variance.
Session Duration Patterns — Human sessions vary in length. A log of visits all lasting exactly the same duration is a bot signature.
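The four diversity signals above can be computed from the visit log directly. This is a sketch assuming each entry has the shape `{ t, path, scrollDepth, durationMs }` (the four recorded data points); the function names are illustrative.

```javascript
// Population variance of a non-empty numeric array.
function variance(xs) {
  const m = xs.reduce((a, b) => a + b, 0) / xs.length;
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / xs.length;
}

// Diversity signals over a visit log sorted by timestamp.
function visitLogSignals(log) {
  const pathDiversity = new Set(log.map((v) => v.path)).size / log.length;
  const scrollVariance = variance(log.map((v) => v.scrollDepth));
  const gaps = [];
  for (let i = 1; i < log.length; i++) gaps.push(log[i].t - log[i - 1].t);
  const intervalVariance = gaps.length ? variance(gaps) : 0;
  const durations = log.map((v) => v.durationMs);
  const durMean = durations.reduce((a, b) => a + b, 0) / durations.length;
  const durationCV = Math.sqrt(variance(durations)) / durMean;
  return { pathDiversity, scrollVariance, intervalVariance, durationCV };
}
```

On a bot-generated log of identical entries, every one of these signals collapses to its degenerate value; an organic log keeps all four strictly positive.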
Storage Accessibility: The system tests whether localStorage read/write operations succeed. Many headless browser configurations and privacy-hardened automation frameworks block or restrict storage access. The ability to write and read back a test value is itself a weak positive signal.
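The write-and-read-back probe can be as small as the sketch below, parameterised over a storage object with the Web Storage `setItem`/`getItem`/`removeItem` methods so it accepts `localStorage` in a browser or a test double elsewhere. The probe key is an illustrative name.

```javascript
// True if a round-trip write/read/delete succeeds on the given storage.
function storageAccessible(storage) {
  try {
    const key = '__ld_probe__';
    storage.setItem(key, 'ok');
    const ok = storage.getItem(key) === 'ok';
    storage.removeItem(key);
    return ok;
  } catch (e) {
    return false; // blocked or restricted storage, common in hardened headless setups
  }
}
```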
Organic Browser Signals: The system checks for the presence of other cookies (indicating prior web activity) and the depth of window.history (indicating genuine navigation history). These are passive checks that require no user interaction.
Privacy Safeguard: All cookie archaeology data is stored locally on the user's device. No fingerprint data, visit logs, or browsing history is transmitted to any server. The data is used solely for local scoring during the verification session. The visit log is capped at 50 entries and automatically trimmed.
5. Adaptive Scoring Model
The scoring model adapts its weight distribution based on the detected input modality. This is critical because desktop and mobile interactions produce fundamentally different behavioural signals.
Desktop Scoring (Mouse Input):
- Involuntary Attention (Layer I): 15 points
- Motor Signature (Layer II): 15 points
- Interaction Physics (Layer III): 20 points
- Environmental Context (Layer IV): 15 points (0.6× weight)
- Cookie Archaeology (Layer V): 15 points (0.6× weight)
- Maximum theoretical score: 80 points
Mobile Scoring (Touch Input):
- Involuntary Attention (Layer I): 20 points
- Motor Signature (Layer II): 5 points (bonus only: micro-movements during tap)
- Interaction Physics (Layer III): 25 points (tap duration + precision)
- Environmental Context (Layer IV): 25 points (full weight, critical for mobile bot detection)
- Cookie Archaeology (Layer V): 25 points (full weight, compensates for absent trajectory data)
- Maximum theoretical score: 100 points
The pass threshold is set at 35 points. This threshold is deliberately set below the midpoint to minimise false rejections of legitimate human users — particularly first-time visitors on mobile devices who have no prior cookie history and produce minimal trajectory data. The system is designed to err on the side of admitting humans rather than blocking them.
Score Composition Philosophy: On desktop, the five layers contribute relatively evenly. On mobile, Layers IV and V carry significantly more weight because Layer II (trajectory) is largely unavailable. This ensures that mobile users are not penalised for the absence of mouse movement data — instead, the system compensates by relying more heavily on environmental and historical signals.
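The adaptive weight table and pass threshold from this section can be sketched directly. The layer point budgets and the 35-point threshold are taken from the text; the assumption here is that each layer reports a normalised 0-1 result which is multiplied into its point budget.

```javascript
// Per-modality layer point budgets (Section 5).
const WEIGHTS = {
  mouse: { attention: 15, motor: 15, physics: 20, environment: 15, cookies: 15 }, // max 80
  touch: { attention: 20, motor: 5, physics: 25, environment: 25, cookies: 25 },  // max 100
};
const PASS_THRESHOLD = 35;

// layerRatios: each layer's raw result normalised to 0-1; missing layers score 0.
function humanityScore(modality, layerRatios) {
  const w = WEIGHTS[modality];
  let total = 0;
  for (const layer of Object.keys(w)) total += w[layer] * (layerRatios[layer] ?? 0);
  return { score: total, pass: total >= PASS_THRESHOLD };
}
```

Note how the threshold behaves as the text describes: a mobile first-time visitor who aces attention and physics alone (20 + 25 = 45) clears the bar without any cookie history.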
6. Security Considerations
Anti-Replay Protection: Each verification session is bound to a unique, server-generated cryptographic nonce. The nonce is embedded in the client-side agent and included in the data package submitted for analysis. The server validates and expires the nonce upon use, preventing replay attacks.
Adaptive Difficulty: The system can dynamically adjust the verification challenge based on the environmental trust score. In high-risk contexts (detected VPN, suspicious browser fingerprint, zero cookie history), additional behavioural signals may be collected before rendering a verdict.
Cookie Poisoning Resistance: A sophisticated attacker could attempt to pre-populate localStorage with a fabricated fingerprint and visit log. The system mitigates this through: (a) cross-referencing the fingerprint timestamp with the visit log timeline for consistency; (b) analysing the statistical distribution of visit log entries — fabricated logs tend to lack the natural variance of organic browsing; (c) treating the cookie score as one of five layers, so even a perfect cookie score cannot compensate for failed behavioural signals.
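Mitigation (a), cross-referencing the fingerprint timestamp with the visit-log timeline, can be sketched as below. The function name, the one-day cut-off, and the requirement that the log span at least 1% of the claimed age are illustrative assumptions.

```javascript
// Reject fingerprints whose claimed age is inconsistent with the visit log.
function timelineConsistent(fingerprintTs, visitLog, nowMs = Date.now()) {
  if (fingerprintTs > nowMs) return false; // future-dated fingerprint
  for (const v of visitLog) {
    if (v.t < fingerprintTs || v.t > nowMs) return false; // visit outside the fingerprint's lifetime
  }
  // A week-old fingerprint whose visits all cluster in the last few seconds is
  // suspicious: require the log to span a meaningful slice of the claimed age.
  if (visitLog.length >= 2) {
    const span = visitLog[visitLog.length - 1].t - visitLog[0].t;
    const age = nowMs - fingerprintTs;
    if (age > 86400000 && span < age * 0.01) return false;
  }
  return true;
}
```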
Privacy by Design: Looper Dot does not require phone numbers, email addresses, or cross-site tracking. Behavioural data is ephemeral — collected during the verification session and discarded after scoring. Cookie archaeology data is stored locally and never transmitted. No personally identifiable information is stored or transmitted to any server.
Accessibility: The dot interaction is designed to work with assistive technologies. Keyboard-only users can trigger verification via the Enter key, with the motor signature analysis adapted to measure keystroke dynamics instead of mouse trajectories.
7. Intellectual Property
The Looper Dot user interface design, interaction pattern, detection methodology, and visual identity are the proprietary intellectual property of Looper Group. The design is being registered under the Registered Designs Ordinance (Cap. 522) of the Hong Kong Special Administrative Region.
The registered design encompasses: (a) the visual composition of a single colour dot presented on a clean, minimal interface as a human verification mechanism; (b) the specific animation sequences associated with the verification states (idle, tracking, analysing, success, failure); (c) the overall screen layout including the sacred geometry background elements and the Looper Group branding placement; (d) the five-layer detection architecture including the Cookie Archaeology methodology.
Trademark: "Looper Dot" and the associated logo mark are trademarks of Looper Group, registered in Hong Kong SAR.
8. Comparative Analysis
| Metric | reCAPTCHA v3 | Cloudflare Turnstile | 2FA / OTP | Looper Dot |
|---|---|---|---|---|
| User Time | 8-15s | 3-5s | 15-30s | <1s |
| Cognitive Load | High | Low | Medium | Zero |
| Second Device | No | No | Yes | No |
| Privacy Impact | High | Medium | Medium | Minimal |
| Accessibility | Poor | Fair | Fair | Good |
| Bot Resistance | Medium | Medium | High | High |
| User Friction | High | Medium | High | None |
| Mobile UX | Poor | Fair | Poor | Native |
| Cookie Tracking | Cross-site | First-party | N/A | Local only |
| Detection Layers | 1 | 1 | 1 | 5 |
9. Human vs Bot — Cookie Signature Comparison
[Figure: side-by-side panels, "Human Cookie Profile" vs "Bot Cookie Profile".]
10. Humanised Cookie History vs Bot Cookie History — Definitive Analysis
The distinction between a humanised cookie history and a bot cookie history is not merely quantitative — it is qualitative, structural, and ultimately philosophical. A human being does not browse the web with purpose-optimised efficiency. Humans are distracted, curious, forgetful, and inconsistent. They open tabs they never read. They scroll halfway through an article and abandon it. They return to a website three days later because they vaguely remembered something. This organic chaos is the fingerprint of consciousness.
A bot, by contrast, operates with mechanical precision. It arrives, executes its programmed task, and departs. If it simulates browsing history, the simulation betrays itself through statistical regularity — because the programmer who wrote the bot cannot replicate the genuine entropy of human indecision. The following analysis defines the specific data signatures that separate these two categories of visitor.
A. Humanised Cookie History — Characteristics of Organic Browsing
1. Fingerprint Age Distribution
A genuine human visitor accumulates fingerprint age naturally. The first visit creates a timestamp. Subsequent visits find that timestamp aged by hours, days, or weeks. The critical insight is that this age cannot accrue in a fresh browser instance: a bot can write an arbitrary backdated value into localStorage, but a fabricated timestamp must then survive the temporal-consistency checks described in Section B, whereas an honest fingerprint aged 72 hours means the browser instance has genuinely existed for 72 hours with localStorage intact. This alone eliminates the majority of bot frameworks, which spawn fresh browser instances per session.
2. Visit Log Irregularity
Human visit logs exhibit what statisticians call "heteroscedasticity" — the variance itself varies. A human might visit three pages in rapid succession on Monday, then not return until Thursday, then visit one page on Friday and leave after 4 seconds. The inter-visit intervals follow no predictable distribution. The session durations range from 2 seconds (accidental click, immediate bounce) to 300+ seconds (deep reading). The scroll depths vary from 0% (never scrolled) to 100% (read to the bottom) with every value in between. This irregularity is not noise — it is the signal.
3. Path Diversity and Navigation Logic
Humans navigate websites with a mixture of intent and serendipity. They arrive at the homepage, click through to a subpage, return to the homepage, visit a different subpage, then perhaps jump directly to a deep-linked page from an external source. The resulting path sequence has internal logic (related pages visited in clusters) but also randomness (unexpected jumps). The ratio of unique paths to total visits is typically between 0.3 and 0.8 — humans revisit some pages but not all.
4. Scroll Depth as Content Engagement Proxy
When a human scrolls, the depth reached is a function of content interest, reading speed, and attention span. On a long-form article, a human might scroll to 60% before losing interest. On a short landing page, they might reach 100% effortlessly. On a page they opened by accident, they scroll 0%. The variance across pages is high, and the distribution is non-uniform — it clusters around certain depths that correspond to natural reading breakpoints (paragraph endings, section headers, embedded media).
5. Session Duration Entropy
Human session durations follow a log-normal distribution — many short sessions (quick checks, bounces) and fewer long sessions (deep engagement). The coefficient of variation (standard deviation divided by mean) for human session durations is typically above 1.0, indicating extreme variability. No two sessions are the same length. A visit log where session durations cluster tightly around a single value (e.g., all between 8-12 seconds) is a strong bot indicator.
6. Cross-Session Cookie Ecosystem
A human browser accumulates cookies over time: session cookies, consent and preference entries, analytics tokens, and identifiers set during ordinary browsing. Looper Dot does not read the content of any cookie (a privacy constraint), and browsers in any case scope document.cookie to the current origin, so the check is necessarily limited to the count and richness of the cookie state visible to the page. Even within that limit, an accumulated cookie footprint is a humanity signal: a returning human browser carries one, while a fresh automation profile arrives with none.
7. History Stack Depth
The window.history.length property reflects the number of pages visited in the current browser tab. A human who has been browsing naturally will have a history depth of 5-50+ entries. A bot that navigates directly to the target URL will have a history depth of 1-2. This is a passive, zero-interaction check that provides immediate signal.
B. Bot Cookie History — Signatures of Automated Sessions
1. The Empty State Problem
The most common bot signature is the complete absence of prior state. Headless browsers (Puppeteer, Playwright, Selenium) launch fresh browser profiles by default. There is no localStorage, no cookies, no history. The fingerprint age is zero. The visit log is empty. This is the digital equivalent of a person claiming to have lived in a city for years but having no utility bills, no library card, and no wear on their shoes. The absence of history is itself the most powerful signal.
2. Synthetic History Fabrication
Sophisticated bots attempt to pre-populate localStorage with fabricated visit logs. However, fabricated logs betray themselves through statistical regularity. A programmer generating fake visit data must choose parameters: how many visits, what timestamps, what scroll depths, what durations. These choices produce distributions that are either too uniform (equal spacing, identical depths) or too perfectly random (uniform distribution, which is itself unnatural — real human data is clustered and skewed). The system analyses the Kolmogorov-Smirnov statistic of the visit log distributions to detect synthetic data.
3. Temporal Consistency Violations
A fabricated fingerprint timestamp must be consistent with the visit log timeline. If the fingerprint claims to be 7 days old but the visit log contains entries from only the last 30 seconds, the inconsistency is flagged. Conversely, if the visit log contains entries spanning 7 days but the timestamps are perfectly evenly spaced (e.g., exactly one visit per day at exactly 14:00:00), the mechanical regularity is itself a bot signature. Humans do not visit websites at metronomic intervals.
4. Storage Access Anomalies
Many headless browser configurations restrict or disable localStorage, IndexedDB, and cookie storage. Some automation frameworks intercept storage calls and redirect them to in-memory stores that do not persist across sessions. The system tests storage accessibility by writing a test value and reading it back, measuring the round-trip time. Genuine browser storage operations complete in under 1ms. Intercepted or emulated storage operations often introduce measurable latency (2-10ms) due to the proxy layer.
5. Cookie Ecosystem Sterility
A bot browser has no cookies from other websites. No Google Analytics tokens, no advertising identifiers, no session cookies from social media platforms, no preference cookies from news websites. This sterile environment is the opposite of the rich cookie ecosystem that accumulates naturally in a human-operated browser. Even privacy-conscious humans who regularly clear cookies will have some residual cookie state from their current browsing session. A browser with zero cookies from external domains is operating in a vacuum that does not exist in normal human browsing.
6. Incognito and Private Browsing Detection
Bots frequently operate in incognito or private browsing mode to avoid cookie persistence. While legitimate humans also use incognito mode, the system treats it as a risk factor rather than a definitive bot signal. The detection method exploits the fact that incognito mode restricts the storage quota available to localStorage — in some browsers, the available quota in incognito mode is significantly smaller than in normal mode. The system writes progressively larger test payloads to measure the effective storage ceiling.
7. Session Isolation Patterns
Bots that run in parallel (e.g., a botnet or a scraping farm) often share identical browser configurations but operate in isolated sessions. If the system detects multiple verification attempts from the same browser fingerprint but with zero shared cookie state between sessions, it indicates session isolation — a pattern characteristic of containerised bot deployments where each instance runs in a fresh, disposable environment.
C. Cookie Archaeology Scoring Matrix
| Signal | Human Range | Bot Range | Weight | Detection Confidence |
|---|---|---|---|---|
| Fingerprint Age | > 30 min | 0 (new instance) | High | 92% |
| Visit Count | 2–50+ over days | 0–1 (single session) | Medium | 78% |
| Path Diversity Ratio | 0.3–0.8 | 0 or 1.0 | Medium | 74% |
| Scroll Depth σ² | > 200 | 0 or identical | High | 88% |
| Inter-visit Interval σ² | > 100K ms | 0 or < 1K ms | High | 91% |
| Session Duration CV | > 1.0 | < 0.1 | High | 89% |
| External Cookie Count | 10–500+ | 0 | Medium | 82% |
| History Stack Depth | 5–50+ | 1–2 | Low | 65% |
| Storage Latency | < 1ms | 2–10ms (proxy) | Low | 58% |
| Storage Quota | 5–10MB | < 5MB (incognito) | Low | 52% |
The composite cookie archaeology score is calculated as a weighted sum of all ten signals above. No single signal is definitive — a human visiting for the first time will have a low fingerprint age, and a privacy-conscious human may have few external cookies. The power of the system lies in the combination: a genuine human will score well on most signals even if they score poorly on one or two. A bot must fabricate convincing data across all ten dimensions simultaneously, which is computationally expensive and statistically improbable. The cookie archaeology layer is designed to be one of five detection layers, ensuring that even a perfect cookie forgery cannot compensate for failed behavioural signals in Layers I through IV.
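The composite weighted sum can be sketched as below. The ten signal names follow the matrix above; each evaluator is assumed to return a normalised 0-1 value, and mapping the High/Medium/Low weights to 3/2/1 is an assumption, since the text does not give numeric weights.

```javascript
// The ten Cookie Archaeology signals with assumed numeric weights (High=3, Medium=2, Low=1).
const COOKIE_SIGNALS = [
  { name: 'fingerprintAge',  weight: 3 },
  { name: 'visitCount',      weight: 2 },
  { name: 'pathDiversity',   weight: 2 },
  { name: 'scrollDepthVar',  weight: 3 },
  { name: 'intervalVar',     weight: 3 },
  { name: 'durationCV',      weight: 3 },
  { name: 'externalCookies', weight: 2 },
  { name: 'historyDepth',    weight: 1 },
  { name: 'storageLatency',  weight: 1 },
  { name: 'storageQuota',    weight: 1 },
];

// Weighted sum of 0-1 signal values, normalised to 0-1 for Layer V's point budget.
function cookieArchaeologyScore(signalValues) {
  const maxTotal = COOKIE_SIGNALS.reduce((a, s) => a + s.weight, 0);
  const total = COOKIE_SIGNALS.reduce(
    (a, s) => a + s.weight * (signalValues[s.name] ?? 0), 0);
  return total / maxTotal;
}
```

Because the result is normalised, a first-time human who scores zero on fingerprint age and external cookies still retains most of the layer's value if the behavioural and variance signals hold up, which is exactly the "no single signal is definitive" property described above.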