How We Think About Layered Information Security at Perspectis AI
A plain-language Perspectis AI perspective on defence in depth: granular access, ethical walls, minimisation, monitoring, AI inside the same guardrails—and honest framing on certification versus product design.
A plain-language guide for leaders, clients, and teams (April 2026)
The short answer
Serious professional organisations do not win on one security control. We treat information security as defence in depth: several independent mechanisms that reinforce each other—access choices, policy hierarchy, ethical walls, confidentiality handling, careful treatment of personally identifiable information, hardened network edges, monitoring, and auditability—so that if one layer misbehaves or is bypassed, others still constrain harm.
This note explains how we talk about that posture with non-technical stakeholders. It is not a certificate, audit opinion, or exhaustive control catalogue. Formal outcomes such as information-security management certification, independent trust reporting, and regulator-specific legal conclusions depend on how Perspectis AI is deployed, which subprocessors are in scope, and what evidence a customer maintains with auditors and counsel.
Why “many layers” is the right mental model
A useful picture is a research campus with different clearance levels: badges at the gate, locked labs inside, rules about what may leave the building, cameras where appropriate, and separate teams that must not compare notes when policy forbids it. No single measure does all the work; the combination is what creates resilience.
That is the spirit behind Perspectis AI: layered controls aligned to how regulated and reputation-sensitive organisations actually operate—not a promise that any one feature makes risk disappear.
Layer 1 — What the platform is allowed to see and do (granular access)
Organisations differ in how much automation they want against email, calendars, documents, and related channels. We support granular access patterns so teams can choose, in plain terms:
- No access — the platform does not touch that channel for substantive work.
- Metadata-focused access — enough context to coordinate work (for example timing and routing signals) without pulling full message bodies where policy forbids it.
- Content access — where policy and contracts allow richer assistance.
Additional switches govern whether natural-language processing, deeper analysis features, cross-product sharing, and similar capabilities are permitted for each organisation. We describe this as a policy-controlled surface area: the same product can be stricter or more permissive depending on what the customer chooses and what professional rules require.
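To make the idea concrete, here is a minimal sketch of what a policy-controlled surface area could look like. All names (`ChannelAccess`, `OrgAccessPolicy`, the channel and feature keys) are hypothetical illustrations, not the product's actual API; the point is the shape: per-channel access levels plus default-deny feature switches.

```python
from dataclasses import dataclass, field
from enum import Enum

class ChannelAccess(Enum):
    NO_ACCESS = "no_access"          # the platform does not touch the channel
    METADATA_ONLY = "metadata_only"  # timing/routing signals, no message bodies
    CONTENT = "content"              # richer assistance where policy allows

@dataclass
class OrgAccessPolicy:
    """Illustrative per-organisation surface area: channel levels plus switches."""
    channels: dict = field(default_factory=dict)
    features: dict = field(default_factory=dict)

    def may_read_body(self, channel: str) -> bool:
        # Anything not explicitly configured defaults to no access at all.
        return self.channels.get(channel, ChannelAccess.NO_ACCESS) is ChannelAccess.CONTENT

    def feature_enabled(self, name: str) -> bool:
        # Default-deny: unknown capabilities stay off until explicitly allowed.
        return self.features.get(name, False)

policy = OrgAccessPolicy(
    channels={"email": ChannelAccess.METADATA_ONLY, "documents": ChannelAccess.CONTENT},
    features={"nlp": True},
)
assert not policy.may_read_body("email")            # metadata-focused: no bodies
assert policy.may_read_body("documents")            # content access granted here
assert not policy.feature_enabled("cross_product_sharing")  # never switched on
```

The design choice worth noting is the default: an unconfigured channel or feature resolves to the most restrictive setting, so permissiveness always requires an explicit customer decision.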
Layer 2 — When two policies disagree (clear hierarchy)
In real firms, rules exist at many levels: firm-wide standards, client-specific requirements, matter-specific restrictions. Those rules can conflict. We implement preemption patterns so the effective outcome is predictable—sometimes the stricter rule wins, sometimes a higher authority caps what lower levels may allow, and sometimes policies merge toward the safest effective outcome when multiple rules apply at once.
Predictability matters as much as strictness. Ethical walls (information barriers) only work when people—and systems—know which rule actually governs a given request.
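The preemption idea above can be sketched in a few lines. This is a simplified model, not the platform's resolution engine: rules are reduced to a single strictness scale, the strictest applicable rule wins, and a higher authority's ceiling can only tighten the outcome, never loosen it.

```python
from enum import IntEnum

class Strictness(IntEnum):
    # Higher value = more restrictive effective outcome.
    ALLOW = 0
    METADATA_ONLY = 1
    DENY = 2

def resolve(rules, ceiling=None):
    """Merge firm-, client-, and matter-level rules toward the safest
    effective outcome. A ceiling from a higher authority caps how
    permissive lower levels may be (it can only raise strictness)."""
    effective = max(rules, default=Strictness.ALLOW)  # strictest rule wins
    if ceiling is not None:
        effective = max(effective, ceiling)           # ceiling only tightens
    return effective

# Firm allows, client requires metadata-only, the matter denies: deny governs.
assert resolve([Strictness.ALLOW, Strictness.METADATA_ONLY,
                Strictness.DENY]) == Strictness.DENY
# No explicit lower-level rules: a firm-level ceiling still applies.
assert resolve([], ceiling=Strictness.METADATA_ONLY) == Strictness.METADATA_ONLY
```

Because resolution is a pure function of the declared rules, the same inputs always produce the same effective outcome, which is the predictability the section describes.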
Layer 3 — Ethical walls and separation duties
Ethical walls are the professional-world idea that certain people, teams, or AI-assisted workflows must not see or combine particular information. We treat walls as enforceable separation, not as a training exercise for the model. Barriers are evaluated with audit-friendly semantics so “do not cross this line” is a platform concern, not a hope embedded in a prompt.
This is especially relevant where confidentiality levels (public, internal, highly sensitive, and similar gradations used in professional practice) must flow through workflows consistently.
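A wall check can be expressed as a small, auditable predicate rather than an instruction to a model. The sketch below is illustrative only (the matter names and the `wall_blocks` helper are invented): barriers are declared as unordered pairs, and a request is refused if granting it would combine two matters that a wall separates.

```python
def wall_blocks(walls, accessible, requested):
    """Return True if adding `requested` to the matters an actor can
    already see would cross any declared barrier. Walls are unordered
    pairs, so each is normalised to a frozenset before comparison."""
    pairs = {frozenset(pair) for pair in walls}
    return any(frozenset((held, requested)) in pairs for held in accessible)

# These two matters must never be combined by one person, team, or workflow.
walls = [("client_a_merger", "client_b_merger")]

assert wall_blocks(walls, accessible={"client_a_merger"},
                   requested="client_b_merger")        # crossing: refused
assert not wall_blocks(walls, accessible={"client_a_merger"},
                       requested="unrelated_matter")   # no barrier: allowed
```

Evaluating the barrier in code (and logging each evaluation) is what makes "do not cross this line" a platform concern that an auditor can verify after the fact.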
Layer 4 — Personally identifiable information and minimisation
Personally identifiable information is any data that can identify a person directly or indirectly. We invest in detection and sanitisation on supported paths so many artefacts store redacted or hashed representations where that is appropriate—while still being honest that defence in depth also relies on tenant isolation, access controls, and encryption. Not every field in every workflow passes through the same sanitiser; we avoid marketing absolutes that internal engineering assessments would not support.
The design intent is minimisation: reduce unnecessary sensitive footprint, keep professional records where the product function requires them, and gate deeper analysis behind the same access and wall policies described above.
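One common minimisation technique is pseudonymisation: detected identifiers are replaced with a salted hash token, so stored artefacts remain useful for correlation without holding the raw value. The sketch below is a toy illustration of that pattern only; the single email regex and the hard-coded salt are stand-ins, and real detection covers far more identifier types with properly managed per-tenant secrets.

```python
import hashlib
import re

# Illustrative pattern only; production PII detection is far broader.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str, salt: str = "per-tenant-salt") -> str:
    """Replace detected identifiers with a short salted-hash token, so the
    same identifier maps to the same token within a tenant but the raw
    value is no longer stored in the artefact."""
    def token(match):
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()[:12]
        return f"<pii:{digest}>"
    return EMAIL_RE.sub(token, text)

out = pseudonymise("Contact jane.doe@example.com about the filing.")
assert "jane.doe@example.com" not in out   # raw identifier is gone
assert "<pii:" in out                      # replaced by a stable token
```

Because the token is deterministic per tenant, downstream workflows can still tell that two artefacts mention the same person, which is often all the product function requires.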
Layer 5 — The application edge and operational monitoring
Customer-facing systems benefit from disciplined HTTP edge practices—security-oriented headers, carefully constrained browser integration rules, and operational surfaces for monitoring classes of abuse such as prompt injection attempts against governed routes. We also invest in observability patterns (metrics, alarms, structured logs) so operators can detect anomalies and respond—understanding that the exact dashboards and thresholds are deployment-specific.
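For readers who want a concrete picture of "security-oriented headers," here is a minimal sketch of a baseline-headers merge, framework-agnostic and not a description of any specific deployment. The header values shown are common hardening defaults; real policies (especially `Content-Security-Policy`) are tuned per application.

```python
# Common hardening defaults; real values are tuned per deployment.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "no-referrer",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge baseline security headers into a response without clobbering
    any header a route has already set explicitly."""
    merged = dict(SECURITY_HEADERS)
    merged.update(response_headers)
    return merged

resp = apply_security_headers({"Content-Type": "application/json"})
assert resp["X-Frame-Options"] == "DENY"          # baseline applied
assert resp["Content-Type"] == "application/json" # route value preserved
```

Centralising this in one middleware layer is itself a defence-in-depth choice: a route that forgets to think about headers still ships with the baseline.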
Layer 6 — AI-assisted work inside the same guardrails
Personal Agent Representative and Executive Personal Assistant capabilities are intentionally not a separate “wild west.” The same access, walls, confidentiality, and human-in-the-loop themes that apply elsewhere apply to assisted actions: approvals where stakes are high, durable records where continuity matters, and no pretence that clever wording can override permissions.
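The "same guardrails" claim can be pictured as a gate that every assisted action passes through. This is a schematic sketch, not the product's execution path; the function and outcome strings are invented. The ordering is the point: permissions are checked first (so prompt wording cannot talk the system past them), and high-stakes actions wait for a human before anything runs.

```python
def execute_assisted_action(action, risk, permissions, approve):
    """Gate an AI-assisted action. Permission checks run before anything
    else, and high-risk actions require a human approval callback; both
    outcomes would be written to a durable audit record in practice."""
    if action not in permissions:
        return "denied: not permitted"          # wording cannot override this
    if risk == "high" and not approve(action):
        return "held: awaiting human approval"  # human-in-the-loop gate
    return f"executed: {action}"

perms = {"draft_reply"}
assert execute_assisted_action("send_funds", "high", perms,
                               lambda a: True) == "denied: not permitted"
assert execute_assisted_action("draft_reply", "high", perms,
                               lambda a: False) == "held: awaiting human approval"
assert execute_assisted_action("draft_reply", "low", perms,
                               lambda a: False) == "executed: draft_reply"
```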
The Perspectis AI Demo Environment is where we make that story tangible: end-to-end scenarios that show how assistance sits inside professional controls—not beside them.
Compliance language we use carefully
Stakeholders often ask how this maps to familiar frameworks. We align product and engineering work to common themes (for example international information-security management annex topics, trust-service criteria used in independent assurance reports, European privacy engineering expectations, and healthcare-style safeguard patterns where deployments target those regimes). We are explicit that mapping is not the same as certification: auditors issue opinions about organisations and production boundaries, not about a source repository snapshot.
| Topic | What the product posture can honestly claim | What remains customer and auditor work |
|---|---|---|
| Certification and attestation | Strong design alignment and diligence-friendly documentation | Formal certificates, in-scope systems, policies, operating evidence |
| Encryption | Industry-standard patterns for data in transit and at rest when correctly configured | Key management, rotation, and infrastructure choices per deployment |
| Training use | Architectural separation between customer workloads and Perspectis-owned model training patterns; third-party model providers remain governed by their terms and customer choices | Customer review of subprocessors, data processing agreements, and retention modes |
| AI oversight | Human-in-the-loop, audit logs, tool governance, and barrier-aware paths where implemented | Firm-specific privilege, ethics, and local law conclusions |
Honest limits (because credibility is a control too)
We call out a few limits plainly:
- No “perfect security.” Any real system can have defects, misconfiguration, or novel attacks.
- Assurance is joint work. Customers must operate identity, devices, and business processes consistent with their own risk tolerance.
- Scores and monitors from internal validation utilities are operational signals, not permanent marketing grades—cryptographic posture evolves with standards and infrastructure choices.
Sources (public references for frameworks, not product claims)
- National Institute of Standards and Technology: Cybersecurity Framework
- International Organization for Standardization: ISO/IEC 27001 — Information security management
- American Institute of Certified Public Accountants: Trust Services Criteria overview (SOC)
- National Institute of Standards and Technology: Artificial Intelligence Risk Management Framework (AI RMF 1.0)
This document is written for external, non-technical readers. We maintain authoritative technical assessments and implementation references for customer diligence under appropriate confidentiality.

