Category: Blog

  • CMDB Health & Governance


    Atrinet Tech Blog Series.
    ServiceNow CMDB Health & Governance
    (1 of 2)


     

    Situation

    You’ve got ServiceNow running. Discovery is humming. Incidents are flowing. Changes are happening. Everything looks operational.

    Then someone decommissions a server, and three business-critical services go dark because nobody knew about the dependencies. Or Security asks for a list of Windows 2012 servers, and the CMDB returns 847 records – 300 are duplicates, 150 are decommissioned but still active, and 200 haven’t been updated in 18 months.

    Your CMDB isn’t just messy. It’s actively lying to you.

    Common Approach

    Most teams treat CMDB health as a “we’ll clean it up later” problem.
    Run Discovery and assume the data is good. Manually spot-check CIs when something breaks. Assign someone to dedupe quarterly. Hope people update ownership during incidents.

    The mindset: Discovery populates the CMDB, so it must be accurate.

    Why It Breaks

    Discovery tells you what exists. It doesn’t tell you if it matters, who owns it, if it’s complete, if it’s correct, or if it’s compliant.

    Without active governance, your CMDB becomes a graveyard of stale, duplicate, and incomplete CIs. Teams stop trusting it. They build shadow spreadsheets. Your automation breaks. Change Management becomes guesswork.

    When something goes wrong, MTTR skyrockets because responders waste time validating whether the CMDB data is even real.

    What We Did Differently

    At Atrinet, we stopped treating CMDB health as a cleanup project and started treating it as continuous governance – baked into operations, not bolted on afterward.

    Instead of reacting to bad data, we leveraged ServiceNow’s full CMDB capabilities to build feedback loops that prevent it:

    • Completeness & Correctness: Enforce required attributes using native health rules. Detect duplicates, orphans, and stale CIs. Auto-remediate or escalate.
    • Attestation & End-of-Life: Use Data Manager to ask CI owners regularly, “Does this exist? Do you own it?” Automate retire-archive-delete flows.
    • Compliance & Reconciliation: Schedule certification cycles with CMDB Workspace policies. Define which data source wins when multiple sources conflict using CMDB 360.
    • Query Builder & Remediation Rules: Create saved queries for impact analysis. Trigger workflows when health issues are detected using out-of-box remediation capabilities.
    CMDB Health & Governance Cycle

    How It Works

    Start with 2–3 critical services. Define required attributes, authoritative sources, and ownership. Enable ServiceNow’s native health features: completeness scores, correctness checks, compliance audits, and reconciliation rules. Build remediation flows that auto-create tasks or escalate to owners. Track completeness %, correctness issues, and MTTR improvement. Expand once proven.
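    To make the completeness metric concrete, here is a minimal sketch in Python. It assumes CI records exported as plain dictionaries; the attribute names and threshold are illustrative, not the ServiceNow schema or API.

```python
# Minimal completeness check over exported CI records.
# Attribute names below are assumptions for illustration only.
REQUIRED_ATTRIBUTES = ["name", "owned_by", "environment", "support_group"]

def ci_completeness(ci: dict) -> float:
    """Fraction of required attributes that are present and non-empty."""
    filled = sum(1 for attr in REQUIRED_ATTRIBUTES if ci.get(attr))
    return filled / len(REQUIRED_ATTRIBUTES)

def completeness_report(cis: list[dict], threshold: float = 1.0):
    """Overall completeness % plus the CIs that need remediation."""
    scores = {ci["name"]: ci_completeness(ci) for ci in cis}
    overall = sum(scores.values()) / len(scores) if scores else 0.0
    failing = [name for name, s in scores.items() if s < threshold]
    return round(overall * 100, 1), failing

cis = [
    {"name": "app-srv-01", "owned_by": "team-a",
     "environment": "prod", "support_group": "ops"},
    {"name": "db-srv-02", "owned_by": "",
     "environment": "prod", "support_group": None},
]
print(completeness_report(cis))  # (75.0, ['db-srv-02'])
```

    The same idea scales: feed the failing list into a remediation flow that creates tasks for CI owners, and track the overall percentage over time.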

    Tradeoffs

    This isn’t free. Initial setup takes 2–4 weeks. CI owners need to participate – if your org treats data quality as someone else’s problem, you’ll need exec sponsorship. Health rules need ongoing tuning.

    But the alternative is worse: teams stop trusting the CMDB, build shadow systems, and you’re back to manual change impact analysis.

    When to Use It

    You need this if changes are failing due to bad CI data, MTTR is high because responders can’t trust dependency maps, audits find gaps, or teams are building Excel trackers.

    If you’re using ServiceNow for Change, Incident, or Asset Management, this isn’t optional.

    Here’s the truth nobody wants to say out loud: if you’re not actively governing your CMDB, you don’t have a CMDB – you have an expensive liability with a ServiceNow license.
    We’ve seen organizations spend six figures on Discovery tools, hire CMDB admins, and still operate like it’s 2005 with spreadsheets and war rooms.
    The problem isn’t the tool.
    It’s the delusion that data quality happens by accident. It doesn’t. You either commit to treating your CMDB like the critical infrastructure it is – with ownership, accountability, and automation – or you admit it’s decorative and stop pretending it drives decisions. There’s no middle ground.
    Half-maintained CMDBs are worse than no CMDB at all, because they give you false confidence right before everything breaks.

    Key Takeaway

    Discovery gives you data. Governance gives you trusted data. And trusted data is what lets you move fast without breaking things.

    In Post 2, we’ll deep-dive into Data Manager (Attestation, End-of-Life & Certification), Completeness, Correctness, and Compliance – with real configurations and gotchas from implementations we’ve run.

    By: Sagie Ratzon
    ITOM Expert, ServiceNow Implementation Specialist
    LinkedIn

  • ServiceNow MID Server Best Practices


    Atrinet Tech Blog Series.
    ServiceNow MID Server Best Practices


     

    If you work with ServiceNow Discovery or integrations long enough, you eventually hear the same sentence: “The MID Server is up, but nothing is working.”
    That moment usually triggers log hunting, credential resets, and finger-pointing. In reality, most MID Server failures are not random. They are the predictable result of how the MID was designed, owned, and operated.

    The MID Server is not just a job runner. It is the production bridge between your network and the ServiceNow platform. Treating it as anything less almost guarantees outages.

    Treat the MID Server as production infrastructure

    A MID Server needs the same discipline as any production system. Use a hardened OS baseline, define patching and upgrade policies, and apply antivirus exclusions that do not break Java or MID processes. Ownership must be explicit. Someone owns uptime, someone monitors health, and someone approves changes. Lifecycle planning matters too. You need an upgrade cadence and a rollback path. If nobody owns the MID, the MID will eventually own your delivery timeline.

    Right-size and isolate by purpose

    “One MID for everything” is a design smell. Discovery and integrations behave very differently and should not always share resources. High-volume integrations such as heavy API polling or event ingestion deserve dedicated MIDs. In segmented networks, deploy MIDs per zone instead of opening broad firewall rules. Isolation reduces blast radius and makes failures easier to reason about.

    Network access and credentials are design decisions

    Most “credential issues” are not credential problems. They are connectivity, DNS, or TLS problems in disguise. Before go-live, confirm required ports, proxy paths, DNS resolution consistency, NAT behavior, NTP sync, and TLS inspection policies. Credentials should follow a clear strategy. Use service accounts, least privilege, defined rotation, and match credential types to access methods such as WinRM, WMI, SSH, or APIs. When Windows Discovery coverage is low, the fix is often credential hygiene and WinRM readiness, not more scanning.
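    Several of these pre-go-live checks can be scripted into a repeatable test run from the MID host itself. A minimal sketch using only the standard library; the hostnames and ports are placeholders, not a complete validation of proxies, NTP, or TLS inspection.

```python
import socket

def check_dns(hostname: str) -> bool:
    """Does this name resolve from the MID host?"""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Is the TCP port reachable (firewall, NAT, and routing permitting)?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def blockers(results: dict[str, bool]) -> list[str]:
    """Reduce raw check results to the items to fix before go-live."""
    return [name for name, ok in results.items() if not ok]

# Placeholder targets; substitute the real instance endpoint and proxy.
results = {
    "dns_instance": check_dns("localhost"),
    "port_https": check_port("127.0.0.1", 443, timeout=1.0),
}
print(blockers(results))
```

    Running the same script after any network or security change doubles as the smoke test discussed later: if a check that passed at go-live starts failing, you know where to look before Discovery does.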

    ServiceNow MID Server

    Scale using pools, not single nodes

    Production environments should assume concurrency. Use multiple MIDs per capability and define selection rules so jobs distribute evenly. Monitor queue depth and execution time. A growing backlog is one of the earliest signals of trouble. A MID can appear healthy while work silently piles up behind it.

    Observe the MID like an application

    Heartbeat alone is meaningless. Track queue backlog, execution latency, error rates by pattern, and JVM heap trends over time. Certificate and TLS errors deserve special attention because they often appear after unrelated security changes. A MID that looks up can still be effectively down if it is drowning in backlog or stalled on downstream calls.

    Expect change to break the MID first

    Java updates, TLS policy hardening, proxy changes, firewall rule cleanup, and certificate rotation often break MIDs before anything else. Treat the MID as a canary. After any infrastructure or security change, run a simple smoke test that validates connectivity, execution, and data flow.

    Build integrations for failure, not hope

    For integrations that rely on MIDs, failure is normal. Implement retries with backoff, ensure idempotency to prevent duplicates, log correlation IDs for traceability, and maintain replay mechanisms for missed windows. This turns short outages into recoverable events instead of incidents.
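    These patterns are straightforward to sketch. The function names and the in-memory dedupe store below are illustrative; production code would persist correlation IDs and retry only on transient errors.

```python
import time

def send_with_retry(send, payload, correlation_id,
                    max_attempts=4, base_delay=1.0):
    """Call a MID-dependent integration with exponential backoff.

    `send` is any callable that raises on failure; the correlation ID
    travels with every attempt so both sides can trace and deduplicate.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload, correlation_id)
        except Exception:
            if attempt == max_attempts:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

_processed: set[str] = set()

def handle_once(correlation_id, handler, message):
    """Receiver-side idempotency: process each correlation ID at most once."""
    if correlation_id in _processed:
        return None  # duplicate delivery caused by a retry; safely ignored
    _processed.add(correlation_id)
    return handler(message)
```

    Retries make short MID outages invisible to the business flow, and the idempotency guard ensures those retries never create duplicate records on the receiving side.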

    Document a first-response playbook

    When something fails, teams should know where to check logs first, how to validate connectivity quickly, and how to distinguish ServiceNow-side issues from source-side problems. This alone can cut mean time to recovery dramatically.

    Keep dev, test, and prod aligned

    Run the MID Servers for dev, test, and prod on the same host, with logical separation per instance. This minimizes environment drift. When network, DNS, Java, and certificates behave the same, testing becomes meaningful. If something breaks in prod, you debug configuration, not infrastructure.

    When all else fails, phone a friend who’s done this before

    If your MID Servers keep misbehaving, Discovery feels fragile, or every security change turns into a fire drill, it is usually not bad luck. It is architecture.  This is where experienced ServiceNow integrators earn their keep. Teams like Atrinet have seen these exact failure patterns many times, know where to look first, and fix root causes instead of symptoms. Sometimes the smartest MID Server optimization is knowing when to call someone who has already made the mistakes for you.

    MID Server failures are rarely mysterious. With ownership, isolation, observability, and resilience, they become predictable and preventable.

    By: Shira Avissar
    Full Stack & ServiceNow Developer
    LinkedIn

  • AI Account Phishing: 20 Million Logins Stolen (And Rising)


    Telecom Security – Part 10 of 10 in the series.
    AI Account Phishing


     

    AI platforms have become essential work tools, handling everything from documents and analysis to code, prototypes, and sensitive conversations. Yet the accounts behind these platforms remain surprisingly unprotected. Attackers have already noticed. In the past two years, stolen credentials for ChatGPT and other AI services have appeared across dark-web markets in volumes nobody expected.

    Group-IB found over 225,000 compromised ChatGPT accounts traded between January and October 2023.
    Kaspersky reported a sharp rise in AI-service credential leaks in 2023,
    with about 664,000 OpenAI-related records exposed – a 33-fold increase in just one year.

    And by 2025, the scale became staggering,
    with reports claiming nearly 20 million OpenAI logins circulating on dark-web markets.

    These numbers confirm a simple truth. AI Account Phishing is already mainstream.

    Attackers are combining this stolen-credential pipeline with convincing phishing. They impersonate AI platforms, replicate login flows, clone branding, and deliver carefully crafted links across email, messaging apps, SMS, and even search ads. A 2024 campaign documented by Barracuda used fake OpenAI billing notifications with realistic domains, varied URL paths, and valid TLS certificates.

    The goal is simple. Get users to click a link and hand over their AI login or API key.

    AI Account Phishing Attempt

    How AI Account Phishing Works, and Why It’s Growing So Fast

    The attack always starts the same way, with a message claiming something urgent. Payment failed. API key unsafe. Access paused. A new model is waiting. Each lure prompts the user to click on a link that appears legitimate, often shortened or disguised behind redirects.

    The landing pages are nearly indistinguishable from real AI login portals. Domains differ by a character or use trusted-looking TLDs. Many pages have valid HTTPS and mimic the exact flow, design, and CSS of official platforms. Once credentials or API keys are entered, attackers harvest stored files, chat history, documents, and code fragments. They run expensive workloads or bundle the stolen account into a marketplace “log.”

    Three structural forces are accelerating this trend.
    AI accounts now hold high-value data – prompts, documents, project context, and API keys.
    Users trust the channels these platforms use – so fake alerts feel naturally credible.
    Users do not treat AI accounts like mission-critical assets – many people access AI tools on personal devices, where infostealer malware spreads easily. Research shows families like LummaC2, Raccoon, and RedLine are a major source of leaked AI credentials.

    Phishing and malware feed each other. Malware steals existing accounts. Phishing steals fresh ones. Both circulate rapidly across dark-web markets.

    Stopping the Attack Before the Login Page Loads

    Most AI Account Phishing depends on one thing. A link.

    Whether delivered by email, chat, SMS, or a search ad, the attack requires the victim to click a URL. That URL is the earliest and most reliable point to break the attack.

    Modern detection focuses on how the domain behaves. Most phishing pages rely on newly registered lookalike domains, AI-themed TLDs, redirect chains hiding the final page, obfuscated parameters, shorteners like bit.ly or t.ly, or fast-rotating hosts with real TLS certificates.
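    Some of the static signals above can be illustrated in a few lines of Python. The brand list, shortener list, and similarity threshold below are assumptions for the sketch, not Fortress URL Scanner DB's implementation, which layers reputation, certificate, and behavioral data on top.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative lists; a real scanner maintains these as live feeds.
KNOWN_SHORTENERS = {"bit.ly", "t.ly", "tinyurl.com", "t.co"}
AI_BRANDS = ["openai.com", "chatgpt.com", "anthropic.com"]

def url_risk_signals(url: str) -> list[str]:
    """Cheap static signals: shortener use and lookalike AI domains."""
    host = urlparse(url).hostname or ""
    signals = []
    if host in KNOWN_SHORTENERS:
        signals.append("shortener")
    for brand in AI_BRANDS:
        similarity = SequenceMatcher(None, host, brand).ratio()
        if host != brand and similarity > 0.8:
            signals.append(f"lookalike:{brand}")
    return signals

print(url_risk_signals("https://0penai.com/login"))  # ['lookalike:openai.com']
```

    Static signals like these are the first filter; newly registered domains, redirect chains, and hosting churn require the behavioral analysis described above.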

    This is where Fortress URL Scanner DB comes in.

    Built for high-volume, real-time link inspection, it analyzes domains, redirects, and obfuscated URLs to identify dangerous behavior associated with AI Account Phishing. It catches lookalike AI login pages, domains impersonating AI brands, and malicious hosting patterns that change rapidly. It works across messaging channels, notifications, internal systems, and automated workflows.

    Fortress URL Scanner DB also maintains a continuously updated risk model for AI-related threat patterns, including:

    • domains containing brand-adjacent keywords
    • redirect behavior typical of AI-phishing kits
    • short-lived hosting and rapid domain churn
    • cross-channel delivery patterns common in AI-account lures

     

    The aim is clear – stop the attack before users ever land on a fake login screen, even when the phishing infrastructure is brand new, short-lived, or built to look legitimate.

    AI Account Phishing iPhone Message

    How to Protect Against AI Account Phishing

    A few practical habits make a significant difference.

    • Never log in through links in emails, messages, or ads.
    • Always check the exact domain in the address bar.
    • Treat AI accounts like any SaaS platform that stores sensitive data.
    • Rotate API keys regularly and avoid using the same key for multiple projects.
    • Store minimal sensitive data in chat history or uploads.
    • Block suspicious links before they reach users.
    • Use systems that detect brand impersonation domains and aggressive redirect behavior.

    And if checking every link feels exhausting,
    Fortress URL Scanner DB is happy to lose sleep so you don’t have to.

    Conclusion

    AI platforms have quietly become part of everyday professional workflows. That makes their accounts part of the modern security perimeter. Attackers target these accounts because they expose data, cost money, and unlock valuable API capabilities. Since almost every attack begins with a link, the most effective defense is stopping that link before the page loads.
    Fortress URL Scanner DB intercepts these threats at their earliest point, helping neutralize AI Account Phishing before credentials or API keys can be taken.

    Want to protect your subscribers from link-based fraud across every channel?
    See how Fortress URL Scanner stops phishing links before the user even sees them!

  • TikTok Phishing Is Exploding Online


    Telecom Security – Part 9 of 10 in the series.
    TikTok Phishing


     

    TikTok is the place where trends start, creators rise, and short videos become global movements in minutes. It is also a place where phishing attackers now operate at full speed. More than 1.6 billion people use TikTok every month, which makes it an irresistible target for fraud operations that rely on one thing above all else.
    A link.

    TikTok phishing can show up as a fake brand offer, a “you won” message, a misleading ad, or a comment on a viral video, but the structure is always similar. Build curiosity or trust, send a link, redirect the user away from TikTok, and steal something of value.

    The FTC reports more than $2.7 billion in social media fraud losses in just the past three years. UK consumer group Which? has repeatedly warned about TikTok-based impersonation scams.
    Security companies like ESET and Norton confirm that more than 70 percent of these attacks include a URL, often shortened or hidden.

    The content changes. The hook changes. The personalities change.
    The URL is the constant.

    Humans can’t moderate TikTok Phishing 

    TikTok moves faster than any other platform. Trends rise and collapse in hours. Comment sections explode within minutes. A malicious link can appear, go viral, and disappear before a human moderator even opens the dashboard.

    Shortened links hide the true destination. Redirect chains hide the landing page. Cloaking hides malicious behavior and shows reviewers a “clean” version of the site. TikTok can remove accounts, but in most cases, the removal happens after users report the scam, not before.

    This is the core problem. The platform sees the link only at the surface level.
    Everything harmful happens behind it.

    TikTok fake profile

    How TikTok Phishing Works, and Why the Link Is the Real Weapon

    TikTok phishing attacks are diverse, but they all rely on the same playbook.
    Build trust, then redirect the victim off the platform.

    Fake Brand Collaboration
    Creators receive a message from someone claiming to represent a well-known brand. The offer looks real, the brief sounds convincing, and the link looks harmless because it is shortened. The final page is a cloned login screen that captures credentials.

    Giveaway Impersonations
    Users are told they won a reward, a prize, or a brand bundle. The link leads to a fake verification form that requests personal or payment details. Which? has flagged these scams multiple times.

    Fake TikTok Ads
    Scammers pay for legitimate-looking ads. The landing page promotes a crypto opportunity or financial app. CNBC reports that these ad-based scams are growing, especially among younger users.

    Viral Trend Hijacks
    Malicious links are inserted into comment threads under high-traffic videos. Many follow three to seven redirects before revealing the real destination. Some activate only on mobile devices to evade review systems.

    Across all scenarios, TikTok is not the problem. The link is.

    Why TikTok Cannot See Link Risk Without Help

    A platform can detect fake accounts, keyword abuse, or suspicious activity patterns.
    It cannot detect what is behind a link unless it follows and analyzes it.

    Shortened URLs
    Bit.ly, tinyurl, t.co, and similar services make phishing links appear safe. Norton highlights how common this tactic is.

    Redirect Chains
    Attackers route users through several domains before showing the real phishing page. Moderation tools typically see only the first hop.

    Cloaking Tactics
    Fraudsters show harmless content to moderation systems, but malicious content to real users. Device-based switching is now standard in phishing kits.

    Dynamic Changes
    A link can behave differently by time, region, or device. A domain that looks safe for reviewers may turn malicious later.

    No human team can keep up with this level of deception.
    The solution is visibility, and visibility requires the right technology.
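    The core of that visibility – redirect discovery – is simple to sketch. The version below injects a `fetch` callable (returning the next `Location` target, or `None` for a final page) so the logic stays clear and testable offline; a real scanner also handles relative redirects, timeouts, and the cloaking tactics described above.

```python
def expand_redirects(url: str, fetch, max_hops: int = 10) -> list[str]:
    """Follow a redirect chain and return every hop, ending at the landing page.

    `fetch(url)` returns the redirect target, or None when the page is final.
    """
    chain = [url]
    for _ in range(max_hops):
        next_url = fetch(chain[-1])
        if next_url is None:
            return chain  # the page the victim would actually see
        chain.append(next_url)
    raise RuntimeError("redirect chain too long - treat as suspicious")

# Example with a fake fetcher standing in for real HTTP requests:
hops = {
    "https://bit.ly/win-big": "https://promo.example/r",
    "https://promo.example/r": "https://fake-login.example/tiktok",
}
print(expand_redirects("https://bit.ly/win-big", hops.get))
```

    Three to seven hops is typical for the comment-thread scams above; a moderation tool that inspects only the first hop never sees the final page.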

    Fake TikTok Scam

    The Solution: Fortress URL Scanner DB

    Fortress URL Scanner DB gives platforms, security teams, and digital ecosystems the missing visibility they need. It turns every suspicious link into a fully analyzed, risk-scored object that a platform can act on instantly.

    Full Redirect Discovery
    Fortress expands every shortened link and follows every hop, even deep redirect chains. Platforms see the same final page the victim would see.

    Behavioral and Reputation Scoring
    Fortress evaluates domain age, DNS patterns, hosting infrastructure, global threat intelligence, and historical behavior. This creates a precise risk profile for every URL.

    Cloaking and Obfuscation Detection
    Fortress identifies encoded redirects, hidden elements, and conditional page behavior. This is the layer that stops scammers who try to fool review systems.

    Continuous Updating
    Phishing links evolve quickly. Fortress updates each profile as behavior changes. It uses intelligent analysis, automation, and selective AI components to scan at scale without slowing down user experiences.

    Easy Integration With Any Platform
    Fortress URL Scanner DB works as a standalone product and can be added to any platform.

    The result is simple. Platforms no longer operate blindly.
    They see the real destination, the real behavior, and the real risk behind every link.

    TikTok Phishing Can Be Stopped With Link Intelligence

    TikTok phishing is not slowing down. It is accelerating because attackers know that most platforms cannot see what happens behind a link. Manual moderation cannot keep pace with cloaking, redirect chains, and fast-changing landing pages.

    Fortress URL Scanner DB gives platforms the missing visibility they need. It reveals every redirect, scores every URL, detects hidden behavior, and stops malicious links before users click them. Any environment that handles user-generated links needs this level of protection.

    TikTok will continue to grow. Phishing will continue to follow. The only reliable defense is the ability to see the link for what it really is.

    Want to protect your subscribers from link-based fraud across every channel?
    See how Fortress URL Scanner stops phishing links before the user even sees them!

  • RCS Fraud: Richer Messaging, Richer Scams


    Telecom Security – Part 8 of 10 in the series.


     

    RCS was meant to be the next big leap in mobile messaging. With verified business accounts, interactive buttons, and rich media, it promised to replace plain text with true digital conversation. That promise is becoming real: RCS monthly active users passed 1.2 billion in 2024, up over 550 percent year-on-year.

    But this evolution has a cost – RCS Fraud.

    According to Juniper Research, RCS Business Messaging fraud is projected to cost mobile subscribers $4.3 billion globally within five years. As adoption spreads, so does the surface area for abuse. Telcos that treat RCS as “just a modern SMS” risk facing the same fraud problems, amplified by richer content and deeper trust.

    The hidden cost of “trusted” rich messaging

    What makes RCS appealing to brands also makes it powerful for fraudsters. The format allows verified business profiles, company logos, and embedded buttons. Users naturally trust those cues. In tests, messages carrying fake brand logos were clicked up to three times more often than classic SMS scams.

    RCS Fraud looks scarily good.

    Fraud actors now exploit this built-in credibility. A fake courier notification with a brand image and “Track Your Parcel” button can trigger instant engagement.
    When messaging looks official, users hesitate less and lose more.

    Example of RCS Chat with a Hotel Vendor

    How fraudsters exploit the new channel

    RCS Fraud is growing in sophistication, offering more vectors for deception:

    • Impersonation attacks.
      Criminals hijack or mimic verified business handles.

    • Malicious interactive content.
      Buttons, QR codes, or carousels redirect to credential-harvesting sites.

    • Bait-and-switch campaigns.
      Legitimate-looking notifications morph into payment requests or refund traps.

    • Cross-platform blending.
      Attackers link SMS, WhatsApp, and RCS in one sequence to appear continuous.

     

    A 2025 GLF report found that 35 percent of operators experienced higher messaging-fraud activity despite new filtering tools, a sign that detection frameworks built for the SMS era can’t read the new playbook.

    Why do legacy firewalls fall short?

    Most operator defenses were designed for simple text. Traditional firewalls rely on pattern rules, keyword filters, or blacklists. RCS Fraud, on the other hand, can hide behind structured metadata, branded assets, and conversational logic that don’t fit those patterns.

    RCS traffic travels through OTT environments where operators can’t fully inspect payloads. The result is that fraudulent messages can appear compliant while hiding behavioral anomalies invisible to static rule engines.

    To protect against RCS Fraud, operators need more than new rules – they need systems that can truly understand message behavior and context.

    The Fortress approach:
    Intelligent pattern recognition at scale

    Fortress Advanced Messaging Firewall was built for this new generation of threats. It continuously analyzes vast messaging flows, building detailed profiles of senders, routes, timing, and content structure to understand what “normal” traffic looks like.

    Each new message is evaluated in real time against this learned context. The firewall detects subtle irregularities—unusual sending bursts, timing mismatches, inconsistent templates, or suspicious delivery paths—that often precede fraud.

    Its classification engine blends advanced heuristics with AI-assisted pattern detection, refining itself through continuous network feedback. This data-driven intelligence allows the firewall to assess risk dynamically and block or quarantine questionable RCS sessions in under 50 milliseconds, even at carrier throughput.

    It’s not about scanning for known bad links, it’s about recognizing when something doesn’t belong.
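    As an illustration of the kind of baseline check such an engine runs, here is a toy z-score test on a sender's per-window message counts. This is a teaching sketch under simplified assumptions, not the Fortress classification engine, which combines many such signals with heuristics and AI-assisted pattern detection.

```python
from statistics import mean, stdev

def is_burst_anomaly(history: list[float], current: float,
                     z_threshold: float = 3.0) -> bool:
    """Flag a sending burst far outside a sender's learned baseline."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # perfectly flat baseline: any change stands out
    return (current - mu) / sigma > z_threshold

# A sender averaging ~100 messages per window suddenly sends 500:
print(is_burst_anomaly([100, 110, 90, 105], 500))  # True
```

    The same structure applies to timing mismatches and template drift: learn a per-sender baseline, then score each new observation against it in real time.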

    Securing the next chapter of messaging

    RCS is redefining how enterprises reach customers, but its credibility can also be its weakness. The same trust signals that power engagement can amplify deception if left unguarded.

    Operators who embed intelligent, data-rich firewalls today will be the ones who secure both revenue and reputation tomorrow. Fortress Firewall provides that foundation, a protection layer that reads patterns, learns behavior, and stops threats before they spread.

    “The networks that learn fastest will be the ones users trust longest.”

    Want to protect your subscribers from link-based fraud across every channel?
    See how Fortress URL Scanner stops phishing links before the user even sees them!

  • WhatsApp Fraud – Can Telecoms Help?


    Telecom Security – Part of 10 in the series.


     

    In 2025, WhatsApp isn’t just where we talk – it’s where we trust.
    Friends, banks, deliveries, even government alerts – they all live in one chat feed.
    And that’s exactly why scammers love WhatsApp Fraud.

    WhatsApp removed 6.8 million accounts tied to scam operations in just the first half of 2025.
    That shows the scale of WhatsApp Fraud, but also its limitation: banning accounts doesn’t stop the real weapon of modern fraudsters – the link.

    End-to-end encryption keeps messages private, not safe.
    Fraudsters now exploit that privacy to send perfectly crafted phishing links that steal credentials, OTPs, and payment data.
    The moment you click, the crime begins.

    ⚠️ Anatomy of a WhatsApp Fraud

    Here’s how a typical attack unfolds:

    1️⃣ A new chat appears – someone posing as a bank, courier, or even a friend.

    “Hi, this is DHL Support. Please verify your delivery details: [bit.ly/DHL-Confirm].”

    2️⃣ The message looks legitimate – logo, tone, and urgency all seem right.
    3️⃣ The victim clicks the link, believing it’s official.
    4️⃣ The link silently redirects through several domains, landing on a fake banking or payment site.
    5️⃣ The user enters login credentials, card details, or an OTP.
    6️⃣ Within minutes, attackers use that data to steal money, hijack accounts, or even take over the victim’s WhatsApp itself.

    Cyber-safety experts warn that common WhatsApp scams rely on links or attachments requesting money, personal data, or verification – all red flags that still trick millions of users each year.

    The entire process takes less than a minute.
    No malware.  No exploit.  Just trust – weaponized.

    WhatsApp account termination in the European Union in 2024, by violation

    🧱 Why Traditional Defenses Fail

    Here’s the hard truth: no one is inspecting those links in time.

    • WhatsApp’s end-to-end encryption prevents operators or regulators from seeing message content.

    • Device antivirus tools rarely analyze shortened or dynamic URLs.

    • Most users assume WhatsApp = safe.

    • And by the time banks detect suspicious activity, the money’s already gone.

    Meanwhile, according to the Communications Fraud Control Association (CFCA), the telecom industry lost US $38.95 billion to fraud in 2023, up 12% from 2021.
    The threat is no longer theoretical — it’s a global, growing financial drain.

    So if WhatsApp fraud happens after the click… where can protection even exist?

    Example of a WhatsApp Fraud – Gold WhatsApp Scam

    🛡️ Fortress Steps In – At the Click

    You can’t scan the message, but you can stop the click.

    When a user taps a link – in WhatsApp, Telegram, or SMS – the phone still needs to connect through the operator’s network.
    Before the browser loads any page, it performs DNS lookups, IP requests, and TLS handshakes.
    That’s where Fortress URL Scanner silently intervenes.

    How It Works:

    1. The user taps a link inside WhatsApp.

    2. The device requests that web address through the mobile network.

    3. Fortress intercepts and scans the URL at the telecom layer.

    4. It unshortens every redirect, checks domain reputation, and applies ML-based pattern detection.

    5. If malicious, Fortress blocks the request or redirects to a safe warning page:

      “⚠️ Suspicious link detected – this site may be trying to steal your information.”

    6. If safe, the browser loads instantly – no delay, no visible change.

    This protection is app-agnostic and privacy-safe.
    Fortress doesn’t see inside the message – it protects what happens after the tap.
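    The decision at the end of that flow reduces to a small policy function. The blocklist entry and threshold below are placeholders for this sketch; the real verdict comes from the scanner's full risk model and live threat feeds.

```python
BLOCKLIST = {"fake-dhl-confirm.example"}  # placeholder entry, not a real feed

def verdict(domain: str, risk_score: float, threshold: float = 0.7) -> str:
    """Network-layer decision for a tapped link: block, warn, or allow."""
    if domain in BLOCKLIST:
        return "block"   # known-bad: the request never completes
    if risk_score >= threshold:
        return "warn"    # redirect to the safe warning page
    return "allow"       # browser loads instantly, no visible change

print(verdict("fake-dhl-confirm.example", 0.0))  # block
```

    Because the check runs during DNS and connection setup, a safe link sees no added latency the user would notice, while a flagged one never reaches the phishing page.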

    🌐 Operators, Wake Up!

    Telcos already own the infrastructure that every click passes through.
    With Fortress, that control becomes protection.

    ✅ Stop fraud before it reaches the endpoint
    ✅ Preserve customer trust in your brand
    ✅ Offer “Link-Protection-as-a-Service” to enterprise clients
    ✅ Turn network security into new recurring revenue

    In a messaging world that’s encrypted and decentralized, operators are the last mile of trust – the only ones who can protect users beyond the message itself.

    “Users may trust WhatsApp – but they pay you.
    That’s why security must start with the operator.”
    (Ohad Kamer, CMO & Co-Founder of Atrinet)

    🔒 Security Beyond the Message

    Phishing no longer lives in SMS alone.
    Links are everywhere – WhatsApp, Telegram, RCS, even email.

    By implementing URL scanning at the network layer, Fortress turns every click into a checkpoint – a place where WhatsApp fraud can be stopped silently, instantly, and privately.

    You don’t need to see the chat to protect the user.
    You just need to see the connection.

    Want to protect your subscribers from link-based fraud across every channel?
    See how Fortress URL Scanner stops phishing links before the page even loads.

  • AIT Fraud – When Bots Inflate Your Bills

    AIT Fraud – When Bots Inflate Your Bills

    Telecom Security – Part 6 of 10 in the series.


     

    In messaging, more traffic doesn’t always mean more success.
    Across today’s A2P ecosystem, billions of messages that appear legitimate are actually fake – generated by bots, scripts, or shady routing partners.
    The industry calls it Artificially Inflated Traffic (AIT), or more bluntly, SMS pumping.

    A2P SMS remains a trusted channel for authentication and alerts, yet fraudsters have found a way to turn that trust into a cash machine. One major social platform, Twitter (X), reportedly lost $60 million per year to inflated OTP messages. Multiply that across banks, delivery apps, and retailers, and the total damage easily reaches into the billions.

    What Exactly Is AIT — and Why It’s So Hard to See

    AIT attacks exploit automation.
    Bots repeatedly trigger real SMS flows — for example, by filling out sign-up or password-reset forms. Each request sends an OTP to numbers controlled by the attackers.

    Because the messages travel through normal routes and generate successful delivery receipts, nothing appears wrong. The enterprise pays for every message; the operator processes every one; the fraudster collects the profit.

    According to Soprano Design, AIT often hides inside genuine traffic patterns: small volume bursts across diverse prefixes, realistic delivery ratios, and authentic sender IDs. Traditional filters can’t flag these anomalies fast enough — they look too normal.

    AIT Flow, by Sinch
    How Does AIT Fraud Work? (img by sinch.com)

    The Real Damage – Revenue & Reputation

    AIT drains budgets quietly.
    Unexplained spikes on invoices, inflated KPIs, and confused support teams chasing “phantom” sign-ups are early warning signs.

    Beyond the direct cost, there’s data pollution. Juniper Research notes that A2P traffic exceeded 2.7 trillion messages in 2023, projected to surpass 3.5 trillion by 2025 – a perfect hiding ground for inflated traffic. Even if only 2 percent is fraudulent, that’s billions of fake messages.

    Operationally, AIT wastes bandwidth, skews analytics, and erodes customer trust. For operators and CPaaS providers, it threatens the credibility of SMS as a secure, reliable channel.

    Ohad Kamer, CMO & Co-Founder of Atrinet, said:

    “AIT is a silent killer of ROI in enterprise messaging.
    It doesn’t attack users, it attacks your margins.”

Termination and SMS prices are increasing while OTP demand keeps rising – a perfect storm for AIT fraudsters

    How Fortress FW Outsmarts AIT

Defending against AIT isn’t just about blocking bad numbers; it’s about seeing patterns, understanding intent, and reacting in real time.

Fortress FW can classify messages as OTP.

By distinguishing verification messages from marketing and alert traffic, Fortress FW can apply focused monitoring to traffic flows, sender behavior, and routing paths in real time.

    It sees the whole flow:

    • Sudden velocity spikes in OTP messages

    • Repeated requests from the same application IDs

    • Abnormal route changes or delivery-report loops

    • Geographic clusters that don’t match real user bases

    Sinch notes that AIT attacks can raise traffic volumes by 10–20X within minutes, overwhelming traditional monitoring tools. Fortress FW’s machine learning engine detects those anomalies early by analyzing historical baselines, message velocity, and destination diversity.

    Its AI behavioral engine continuously learns what “normal” traffic looks like for each client or tenant. When patterns deviate – whether by volume, timing, or distribution – Fortress FW flags, throttles, or blocks them instantly.

    This isn’t static filtering. It’s adaptive defense: a system that evolves as fraud tactics evolve.
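One of the signals above – a sudden velocity spike against a learned baseline – can be sketched with simple statistics. This is a minimal illustration, assuming per-minute OTP counts per sender as input; the 10x surge factor mirrors the 10–20x spikes described above, whereas a production engine would learn per-tenant baselines rather than use fixed thresholds.

```python
from statistics import mean, stdev

def is_ait_suspect(history: list[int], current: int,
                   surge_factor: float = 10.0, z_cutoff: float = 4.0) -> bool:
    """Flag a traffic window whose OTP velocity breaks the learned baseline."""
    baseline = mean(history)
    spread = stdev(history) or 1.0  # avoid divide-by-zero on flat history
    z_score = (current - baseline) / spread
    return current > surge_factor * baseline or z_score > z_cutoff

normal_day = [40, 52, 47, 44, 50, 49, 45, 51]  # OTPs/minute for one sender

print(is_ait_suspect(normal_day, 55))   # False: within normal variation
print(is_ait_suspect(normal_day, 600))  # True: ~12x baseline, classic AIT
```

The same shape generalizes to the other signals (repeated application IDs, route changes, geographic clusters): establish a baseline, measure deviation, escalate when the deviation is extreme.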

    For decision makers, the value is clear:

    • Revenue protection: Stop paying for fake traffic before it’s billed.

    • Operational clarity: Clean, trustworthy analytics.

    • Customer trust: Real users, real conversions, zero disruption.

    The combination of data depth, classification accuracy, and adaptive AI gives Fortress FW a clear advantage:

    • Detects abnormal OTP or messaging traffic before it becomes a billing issue.
    • Learns your network’s normal behavior to minimize disruption.
    • Operates in real time, without adding latency or requiring routing changes.
    • Provides transparent reports so fraud and finance teams can act with confidence.

     

    The result is simple: no fake users, no wasted SMS spend, no erosion of trust.

    Don’t Let AIT Eat Your Margins

    AIT doesn’t steal passwords or data; it steals money, time, and trust.
    The danger lies in its invisibility – by the time finance notices, the damage is already invoiced.

    Fortress FW changes that.

    By analyzing traffic behavior in real time and learning each network’s unique signature, it identifies and neutralizes inflated traffic before it reaches your bill or your brand.

    “Fraudsters don’t need to break encryption;
    They just need your system to keep sending.”
     

    – Yoav Segman, Head of VAS & Security

    With Fortress, they can’t.

  • How Phishing Weaponizes Social Networks

    How Phishing Weaponizes Social Networks

    Telecom Security – Part 5 of 10 in the series.


     

    Social platforms now host a meaningful slice of global phishing activity – APWG counted ~1,003,924 unique phishing sites in Q1 2025, and the volume rose again in Q2!  That’s millions of risky links surfacing every month, and a growing number of them are delivered through feeds, DMs, and bio links.

Phishing used to live mostly in messages and emails. Today, attackers treat social networks as first-class distribution channels: they drop a short link in a post or Story, DM an “urgent” notice, or hide a malicious landing page behind a profile bio (link-in-bio). Researchers have observed campaigns that use legitimate platform redirects (TikTok’s, for example) as stealthy first hops to credential-phishing pages.

At the same time, defenders are losing ground. Security tooling that checks only the visible domain or uses static blocklists misses multi-hop redirect chains and links that are dynamically generated or executed by JavaScript. Cloudflare’s recent research shows attackers abusing link-wrapping and redirect services to mask malicious payloads and evade detection, turning previously protective controls into attack vectors.

     

    Reported Phishing Attacks, 2Q2024-1Q2025

    Three technical and product weak points make social-platform phishing so effective:

    1. Short links & link-in-bio aggregators 
      Shorteners and bio-link services collapse complex URLs into tiny tokens users trust, and they’re repeatedly abused to hide final destinations. Shortener operators run abuse programs, but detection is reactive by nature, meaning it only happens AFTER users are hurt.

    2. Open redirects and platform redirects
      Legitimate redirect features (profile links, share UIs) can be abused as covert proxies that bounce victims through benign domains before landing on malicious pages. Cofense and others have documented TikTok/open-redirect abuse in credential attacks.

    3. Phishing-as-a-Service (PhaaS) & disposable domains
      Attack kits spin up hundreds of throwaway domains and templates in hours; Microsoft and partners recently disrupted an operation that used hundreds of domains to steal thousands of credentials, demonstrating how quickly attackers can scale.

Because of these factors, surface-level scanning (first-hop checks, static lists) and slow manual review are insufficient – scale and speed favor the attackers.

    Platforms must stop treating links as incidental UX items and start treating them as security objects. Practical, deployable capabilities: 

    • Full redirect-chain resolution (in a real browser): Expand every shared URL (shorteners, bio links, platform redirects), execute JavaScript and follow meta-refresh so the canonical landing is revealed before the link is broadly visible.

    • Fast dynamic sandboxing of final landings: Short automated runs that detect credential forms, invisible iframes, unusual network calls or drive-by payloads — seconds, not hours.

    • Heuristic + reputation scoring of chains: features such as chain length, final-domain age, WHOIS privacy, hosting patterns and known PhaaS indicators create a reliable risk signal.

    • Behavioral signal fusion: correlate URL risk with account telemetry (DM/post velocity, newly created accounts posting links, geo/device anomalies) to spot compromised accounts and coordinated campaigns.

    • Graduated mitigation workflows: interstitial warnings for medium risk, automatic throttling + verification for high risk, and automated takedown requests for confirmed malicious infrastructure.

    These are the exact capabilities that stop high-reach campaigns before they go viral — and they’re the core strengths you should expect from a world-class URL database and scanner.
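The heuristic scoring and graduated mitigation described above can be sketched as a toy scorer. The weights, thresholds, and tier names here are invented purely for illustration; a real engine would learn them from labeled campaigns and many more signals.

```python
def chain_risk(chain_length: int, domain_age_days: int,
               whois_private: bool, phaas_indicator: bool) -> float:
    """Return a 0.0-1.0 risk score for one resolved redirect chain.
    Weights are illustrative, not calibrated."""
    score = 0.0
    score += 0.25 if chain_length >= 3 else 0.0     # long multi-hop chain
    score += 0.30 if domain_age_days < 30 else 0.0  # freshly registered landing
    score += 0.15 if whois_private else 0.0         # hidden registrant
    score += 0.30 if phaas_indicator else 0.0       # matches a known kit
    return min(score, 1.0)

def mitigation(score: float) -> str:
    """Graduated response: warn, throttle + verify, or request takedown."""
    if score >= 0.7:
        return "takedown"
    if score >= 0.4:
        return "throttle+verify"
    if score >= 0.2:
        return "interstitial-warning"
    return "allow"

print(mitigation(chain_risk(4, 5, True, True)))      # throwaway PhaaS domain
print(mitigation(chain_risk(1, 2000, False, False))) # aged, direct link
```

A four-hop chain landing on a five-day-old, WHOIS-private domain matching a PhaaS kit maxes out the score and triggers takedown, while a direct link to a long-aged domain passes untouched.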

    Most Targeted Sectors 2025

    Fortress Database and URL Scanner powers link security at scale. Here’s what it does best:

    • Real-time chain resolution & canonicalization
      Expand every shared URL in a real browser (execute JS, follow meta-refresh) and store the canonical final landing — so you block the destination, not the distracting first hop.

    • Lightning-fast sandboxing & verdicts
      Run short dynamic sandboxes and return low-latency decisions at scale, so you stop threats without breaking user experience.

    • Hybrid risk engine + reputation graph
      Combine ML, User+AI-Created Rules, and threat feeds with rich signals (domain age, registrar patterns, known PhaaS markers) to deliver one simple, actionable risk score.

    • Signal fusion & open APIs
      Fuse URL risk with account and traffic telemetry (DM/post velocity, geo/device anomalies) via easy APIs, so you spot compromised accounts and coordinated campaigns with high confidence.

    • Automated remediation & takedown orchestration
      Push IOCs automatically to shortener vendors, registrars and partners, trigger takedowns and remove reusable malicious infrastructure fast.

    • Enterprise ops, low friction & compliance
      Ship with dashboards, SOC hooks, allow-lists and governance controls, plus flexible deployment (cloud/on-prem/hybrid) and onboarding — built to minimize false positives and meet enterprise requirements.

    Social-platform phishing is not an edge case — it’s a high-volume, high-trust attack surface. Platforms that expand, analyze and score links in real time – can stop the majority of viral phishing before it harms users or brands.

    If you manage product or security for a social app, aggregator or CPaaS, start by treating every shared URL as a security event: expand it, sandbox it, score it, and automate remediation.

    Or, just use Fortress DB & URL Scanner. Brilliantly complicated, beautifully simple.

• $4.5M Fines, Class Actions, Churn – The High Cost of a Weak URL Database

$4.5M Fines, Class Actions, Churn – The High Cost of a Weak URL Database

    Telecom Security – Part 4 of 10 in the series.


     

    Communications Platform as a Service (CPaaS) providers like Twilio and Telnyx sit at the heart of enterprise messaging. Recently, regulators, courts, and customers have all sent a clear message: if malicious traffic slips through, the platform pays the price.

    Recent headlines prove the stakes:

• T-Mobile now enforces Severity-0 non-compliance fines for messaging violations such as phishing, smishing, and social engineering; Tier 1 violations can cost up to $2,000 per incident.

• In 2024, Twilio faced a TCPA class-action lawsuit over traffic enabled by its platform.

• Carriers like T-Mobile enforce fines of up to $10,000 per incident for phishing or smishing traffic.

• Twilio confirmed an SMS phishing (smishing) attack in which employees were tricked via fake IT messages, allowing attackers to access customer data.

    These are not isolated cases. They’re signals that weak URL defenses are costing CPaaS platforms millions in fines, lawsuits, and customer churn.

    Atrinet Fortress URL DB, Short URL Expansion
    Atrinet Fortress URL Scanner inspects the entire Redirect Chain!

    Every CPaaS provider talks about “fraud detection,” but the truth is simple:
    If the URL database behind your scanner is weak, everything else fails.

    Most failures happen because databases:

    • Aren’t updated in real time.

    • Don’t use AI or ML to catch zero-day phishing domains.

    • Lack Google Web Risk integration, missing global threat visibility.

    • Worst of all, can’t scan shortened links like bit.ly, tinyurl, or t.co.

    That last gap – shortened links – is the most dangerous. Attackers hide behind redirects to mask phishing domains. Without full expansion and analysis, a malicious link looks harmless until it’s too late. For CPaaS security, ignoring shortened links is like locking the front door but leaving the window wide open.

    To protect platforms, enterprises, and end-users, the URL DB must evolve.
The gold standard for CPaaS security and URL scanners now includes:

    • Real-time updates – phishing domains emerge by the second.

    • AI + heuristics – catching obfuscation and zero-day phishing tricks.

    • Google Web Risk – leveraging one of the largest threat intelligence feeds in the world.

    • Ultra-low latency – sub-millisecond lookups to keep SMS and API traffic instant.

    • Shortened link resolution – expanding and inspecting every redirect chain to expose the true destination.

    Without this, CPaaS providers will keep paying fines, losing customers, and seeing their brand names dragged into headlines.
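To make the latency requirement concrete: once redirect chains are expanded offline, the hot path of a lookup reduces to canonicalization plus an O(1) in-memory set check, which is why sub-millisecond verdicts are feasible at SMS scale. The sketch below is illustrative only; the sample domains are placeholders, not real feed entries.

```python
import time
from urllib.parse import urlparse

# Placeholder in-memory threat feed of one million known-bad domains.
BLOCKLIST = {f"bad-{i}.example" for i in range(1_000_000)}

def canonical_host(url: str) -> str:
    """Lowercase the hostname and strip a leading 'www.' before lookup."""
    host = (urlparse(url).hostname or "").lower()
    return host[4:] if host.startswith("www.") else host

def is_blocked(url: str) -> bool:
    return canonical_host(url) in BLOCKLIST

start = time.perf_counter()
blocked = is_blocked("https://www.bad-424242.example/login")
elapsed_ms = (time.perf_counter() - start) * 1000

print(blocked)           # True: hit inside the million-entry set
print(elapsed_ms < 1.0)  # the lookup itself runs far below one millisecond
```

The expensive work (chain expansion, sandboxing, ML scoring) happens off the hot path; the per-message check stays a constant-time hash lookup, so message delivery is never delayed.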

    This is exactly why we built Atrinet Fortress URL Scanner DB.

| Capability | Weak / Legacy URL DB | Atrinet Fortress DB |
| --- | --- | --- |
| Database freshness | Updates slowly, missing new phishing domains | Real-time updates, catching threats as they emerge |
| AI/ML detection | Basic or rule-based only | AI + heuristics, detecting zero-day phishing patterns |
| Google Web Risk integration | Absent or partial | Full integration with Google Web Risk |
| Latency | High lookup delays, slows messaging | Ultra-low latency (<1 ms) for seamless traffic |
| Shortened link support | Cannot expand bit.ly, tinyurl, t.co, or other redirects | Full shortened-link scanning, resolving every redirect chain |
| Deployment flexibility | Cloud-only, limited integration | Cloud or on-prem, for best performance |

    With Fortress DB, CPaaS providers don’t just check a compliance box – they eliminate the very vector attackers rely on most.

    The message is clear: fines, lawsuits, and churn aren’t “possible.” They’re already happening.

    The difference between CPaaS companies that thrive and those that fall behind will be the strength of their URL database. A DB that can’t handle shortened links or real-time threats is a liability.

    Atrinet Fortress DB is the answer.

    It’s real-time, AI-powered, Google Web Risk-integrated, and built to neutralize shortened links before they damage your platform, your customers, and your reputation.

  • AI vs AI – Stopping AI Phishing

    AI vs AI – Stopping AI Phishing

    Telecom Security – Part 3 of 10 in the series.


     

Congrats, AI phishing attacks are a thing now, and they’re here to stay. Phishing has been around for decades, but artificial intelligence is rewriting the rules. LLMs can now craft messages so convincing that even trained eyes struggle to tell real from fake.

    In a controlled experiment, IBM found that AI-generated phishing messages achieved an 11% click rate, compared to 14% for human-written ones — a near-identical success rate that proves how far the technology has advanced (ABA Banking Journal, Axios). Other research has found even higher engagement, with over 50% of recipients clicking AI-generated Spear Phishing messages in certain test conditions (ACFE).

    The old warning signs like awkward grammar, strange tone, and misspelled URLs are fading fast. LLMs can flawlessly mimic a brand’s tone, local slang, and even the writing style of a specific individual.
    The takeaway is clear: criminals can now rely on machines to produce phishing campaigns that are highly effective, scalable, and convincing.

    According to Harvard Business Review, AI dramatically reduces the cost and effort needed for spear phishing. What once took hours for a skilled attacker to produce can now be generated in seconds, making hyper-personalized spear phishing as easy to launch as generic spam.

    This means AI-powered phishing is not only more convincing, but it is also more accessible to less sophisticated attackers.

    Lower barriers to entry, lower costs, and higher effectiveness create the perfect storm: more attacks, more often, with higher success rates.

AI-generated phishing
According to IBM – click to see the full research

     

    The only way to win? Fight AI with AI.

    AI phishing constantly mutates by changing words, sentence structure, and even visual layouts to bypass both human suspicion and automated filters. The familiar red flags that used to give scams away are disappearing.

    Whether the bait is a fake invoice, a missed delivery alert, or a forged CEO email, the endgame is almost always the same: Get the victim to click a malicious URL.

    That click is the choke point where these attacks can be stopped, if you can catch the threat in time.

    Atrinet Fortress FW was built for this exact moment.
    It doesn’t just check URLs against static lists; it combines the Google Web Risk database, which is updated in real time, with Fortress AI threat intelligence to catch what others miss.

    • Real-Time, Cloud-Scale Protection – Every link is scanned before the user can click it.

    • AI-Driven Threat Hunting – Fortress analyzes live traffic patterns across your network to spot malicious trends early.

    • Zero-Day URL Detection – Even brand-new, never-before-seen phishing links are identified and categorized in real time.

    • Automated Defense Rules – Fortress can instantly generate and deploy rules to block similar threats in the future.

    • Telco-Grade Performance – Handles massive volumes at carrier speed, stopping AI-powered phishing before it ever reaches the target.

    In short, Fortress fights AI with AI, turning the attackers’ own advantage into their biggest weakness.

    AI-powered phishing is not a future threat; it is happening now at an unprecedented scale.

    Enterprises, telecoms, and regulators cannot afford to rely on outdated defenses. The only way to protect users from the next click is to match the attackers’ speed, scale, and intelligence.

With Fortress, you don’t just keep up – you stay one step ahead.