Episode 47 — Map OSI and TCP/IP Models to Security Controls
A brief anchor helps before we place controls. The Open Systems Interconnection (O S I) model describes seven conceptual layers from the physical wire to the application that a human or program interacts with, while the Transmission Control Protocol / Internet Protocol (T C P / I P) model condenses those concerns into four broader buckets. The value of these abstractions is design discipline: they help us isolate causes, align countermeasures with where risks enter, and avoid mixing responsibilities that produce brittle systems. When a failure shows up as garbled frames, we look to the lower layers; when it appears as broken sessions or corrupted payloads, we focus higher. Layers guide both prevention, by placing controls at the first effective choke point, and verification, by telling us which signals should prove a control is working.
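To make the two vocabularies concrete, here is a minimal Python sketch of the common textbook mapping, where the seven O S I layers collapse into the four T C P / I P buckets of Link, Internet, Transport, and Application. The table itself is standard; the helper function is purely illustrative.

```python
# The common four-layer TCP/IP grouping of the seven OSI layers.
OSI_TO_TCPIP = {
    1: ("Physical",     "Link"),
    2: ("Data Link",    "Link"),
    3: ("Network",      "Internet"),
    4: ("Transport",    "Transport"),
    5: ("Session",      "Application"),
    6: ("Presentation", "Application"),
    7: ("Application",  "Application"),
}

def tcpip_bucket(osi_layer: int) -> str:
    """Return the TCP/IP bucket that absorbs a given OSI layer."""
    name, bucket = OSI_TO_TCPIP[osi_layer]
    return f"OSI {osi_layer} ({name}) -> TCP/IP {bucket}"

for layer in OSI_TO_TCPIP:
    print(tcpip_bucket(layer))
```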
At the physical layer, the controls are wonderfully unglamorous and absolutely decisive. Cabling pathways are locked and labeled; patch panels and Intermediate Distribution Frames are in access-controlled spaces; unused switch ports are administratively shut and, in sensitive areas, covered with physical port locks. Environmental sensors track door openings and tamper states, while camera coverage shows who touched what and when. Inspection does not guess: it walks the closets, tests that dark ports truly refuse link, confirms that spare Small Form-factor Pluggables are inventoried, and checks work orders against changes observed on the floor. The evidence that something is protected at this layer is tangible—keys, badges, seals, and denial of link—not just a policy line.
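Parts of that inspection can even be scripted. The Python sketch below compares an inventory's intent against live link state to find ports that should be dark but are not; the PortRecord shape and the sample data are hypothetical stand-ins for whatever your asset system and switch tooling actually export.

```python
# Sketch: verify that ports marked "dark" in the inventory really refuse link.
# PortRecord and the sample records are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class PortRecord:
    port: str           # e.g. "Gi1/0/14"
    admin_down: bool    # inventory says the port should be shut
    link_up: bool       # live status pulled from the switch

def audit_dark_ports(records: list[PortRecord]) -> list[str]:
    """Return ports that should be dark but currently show link."""
    return [r.port for r in records if r.admin_down and r.link_up]

inventory = [
    PortRecord("Gi1/0/14", admin_down=True,  link_up=False),  # compliant
    PortRecord("Gi1/0/15", admin_down=True,  link_up=True),   # violation
    PortRecord("Gi1/0/16", admin_down=False, link_up=True),   # in service
]
for port in audit_dark_ports(inventory):
    print(f"ALERT: {port} is supposed to be shut but has link")
```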
Moving to the data link, we shift to how neighbors share a medium and how switches forward frames. Media Access Control (M A C) filtering blocks unknown devices in contained networks, while Virtual Local Area Networks (V L A N s) separate broadcast domains to reduce bleed between tenants or roles. Switch hardening removes dangerous defaults: management on out-of-band networks, disabled unused services, storm control to dampen floods, and Dynamic Host Configuration Protocol (D H C P) snooping to block rogue address assignment. The proof lives in switch running-configs, port-security violation counts, and sampled frame captures showing correct tagging at trunk boundaries. A small but meaningful test is to plug in an unauthorized device and watch the port drop while the log records the attempt with a date, a port, and a M A C address.
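Those sampled captures can be checked mechanically. Here is a short sketch using the scapy packet library to scan a capture taken at a trunk boundary for unexpected V L A N tags; the capture filename and the allowed V L A N set are assumptions for illustration.

```python
# Sketch: scan a trunk-boundary capture for unexpected VLAN tags.
# Requires scapy; "trunk_sample.pcap" and ALLOWED_VLANS are assumptions.
from scapy.all import rdpcap, Dot1Q

ALLOWED_VLANS = {10, 20, 30}           # hypothetical tenant/role VLANs
packets = rdpcap("trunk_sample.pcap")  # hypothetical sampled capture

for i, pkt in enumerate(packets):
    if pkt.haslayer(Dot1Q):
        vlan = pkt[Dot1Q].vlan
        if vlan not in ALLOWED_VLANS:
            print(f"frame {i}: unexpected VLAN tag {vlan}")
    else:
        # Untagged frames may be legitimate native-VLAN traffic;
        # flag them for review rather than assuming a violation.
        print(f"frame {i}: untagged frame on trunk")
```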
At the application layer, we meet the logic that users and services perceive, and protections become specific to business intent. A Web Application Firewall (W A F) enforces syntactic and semantic rules for endpoints, while authentication binds users or clients to accounts, and rate limiting keeps resource use inside safe bounds. Logging captures request paths, identifiers, and decisions in a form that can be reconstructed for incident review without exposing secret values. Strong controls here are explicit: well-defined allow lists for methods and parameters, clear error handling that yields stable codes, and protections that degrade gracefully under load. The artifacts include configuration snapshots, sampled sanitized logs, and playback of tests that show the W A F and application logic agree about what is permitted.
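A minimal sketch of that allow-list idea, assuming a hypothetical endpoint and parameter names: only named methods and parameters pass, and every rejection returns a stable code that incident review can reconstruct.

```python
# Sketch of an application-layer allow list. The endpoint, methods,
# parameters, and error codes are hypothetical examples.
ALLOWED = {
    "/api/orders": {
        "methods": {"GET", "POST"},
        "params": {"id", "status", "page"},
    },
}

def check_request(path: str, method: str, params: dict) -> tuple[bool, str]:
    """Allow only known paths, methods, and parameters; return a stable code."""
    rule = ALLOWED.get(path)
    if rule is None:
        return False, "ERR_UNKNOWN_PATH"
    if method not in rule["methods"]:
        return False, "ERR_METHOD_NOT_ALLOWED"
    if set(params) - rule["params"]:
        return False, "ERR_UNKNOWN_PARAM"
    return True, "OK"

print(check_request("/api/orders", "GET", {"id": "42"}))            # allowed
print(check_request("/api/orders", "DELETE", {"id": "42"}))         # method denied
print(check_request("/api/orders", "GET", {"id": "42", "x": "1"}))  # param denied
```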
Attacks love ambiguity, and mapping them to layers shrinks that ambiguity. Address Resolution Protocol (A R P) poisoning is a data link problem, countered by switch defenses and detected through suspicious gratuitous replies; Border Gateway Protocol (B G P) route leaks are a network-layer failure solved with strict filtering and observed in control-plane logs; Transport Layer Security (T L S) downgrade attempts press the transport boundary and are blocked by version and cipher negotiation rules; serialization bombs live in presentation parsing and are defused by input limits; injection flaws manifest at the application layer and are mitigated by parameterization and canonicalization. The virtue of the model is clarity: each threat has a first-class home where it can be detected early and countered cheaply, even if additional guardrails exist above.
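As one concrete instance of those negotiation rules, the Python standard library's ssl module can pin a floor on the accepted protocol version so a downgrade attempt simply fails the handshake; the host and port here are placeholders.

```python
# Sketch: refuse TLS downgrade by setting a minimum protocol version.
# Standard-library ssl module (Python 3.7+); host/port are placeholders.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())  # e.g. 'TLSv1.3'
```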
Monitoring spans every layer, and analysts rely on signals that correspond to where problems originate. Physical anomalies show up as unexpected link flaps, power alerts, or door-open events tied to device outages. Data link health is visible in interface error counters, port-security violations, and V L A N tagging inconsistencies. Network-layer detection uses flow records, route-change logs, and denied Access Control List (A C L) counters to trace lateral movement or beaconing. At transport, handshake failures and certificate validation errors tell a story about impersonation or misconfiguration. Session and presentation issues surface as token verification failures and parser exceptions, while application signals include spikes in error codes, W A F rule hits, and unusual latency distributions. Good detection design teaches teams to read these signatures like a language.
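Reading interface error counters is that language in miniature. The sketch below flags a data link problem from the delta between two polling cycles; the counter values and the alert threshold are invented for illustration.

```python
# Sketch: flag data link trouble from interface error-counter deltas.
# Counter samples and THRESHOLD are hypothetical.
THRESHOLD = 100  # new errors per polling interval worth an alert

previous = {"Gi1/0/1": 1200, "Gi1/0/2": 40}  # last poll (input errors)
current  = {"Gi1/0/1": 1450, "Gi1/0/2": 41}  # this poll

for iface, count in current.items():
    delta = count - previous.get(iface, 0)
    if delta > THRESHOLD:
        print(f"{iface}: {delta} new input errors this interval")
```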
A quick diagnostic test can build useful intuition. When an alert mentions excessive retransmissions and failed handshakes after a certificate change, the most precise starting point is the transport layer, confirming version and certificate validation before escalating up or down. If users report that only devices on a newly installed switch lose connectivity while others on the same floor work fine, aim first at the data link, checking V L A N assignments and port profiles. If a service suddenly starts sending secrets in clear text after a library update, the presentation and application layers deserve immediate scrutiny for serializer or header regressions. Practicing this triage language out loud is not busywork; it is how teams move quickly without creating new problems.
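A team could even encode its first-pass triage rules, as in this deliberately simple sketch; the keyword table is illustrative, not a complete decision procedure.

```python
# Sketch: a first-pass triage table in the spirit of the drills above.
# The keyword rules are hypothetical and intentionally coarse.
TRIAGE_RULES = [
    ({"retransmission", "handshake", "certificate"}, "transport"),
    ({"switch", "vlan", "port"},                     "data link"),
    ({"clear text", "serializer", "header"},         "presentation/application"),
]

def first_layer_to_check(symptom: str) -> str:
    """Map a symptom description to the most precise starting layer."""
    text = symptom.lower()
    for keywords, layer in TRIAGE_RULES:
        if any(k in text for k in keywords):
            return layer
    return "gather more signal before picking a layer"

print(first_layer_to_check("failed handshakes after a certificate change"))
print(first_layer_to_check("only devices on the new switch lose connectivity"))
```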