US20210090816A1 - System and method for email account takeover detection and remediation utilizing ai models - Google Patents
- Publication number
- US20210090816A1 (application number US16/949,863)
- Authority
- US
- United States
- Prior art keywords
- login
- login attempts
- model
- user
- attempts
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
- H04L63/1425—Traffic logging, e.g. anomaly detection
- H04L63/1441—Countermeasures against malicious traffic
- H04L63/1466—Active attacks involving interception, injection, modification, spoofing of data unit addresses, e.g. hijacking, packet injection or TCP sequence number attacks
- H04L51/212—Monitoring or handling of messages using filtering or selective blocking
- H04L51/42—Mailbox-related aspects, e.g. synchronisation of mailboxes
- G06N20/00—Machine learning
Definitions
- Cyber criminals are increasingly utilizing social engineering and deception to successfully conduct wire fraud and extract sensitive information from their targets.
- Spear phishing, also known as Business Email Compromise (BEC), is rapidly becoming the most devastating new cybersecurity threat. It is a cyber fraud in which the attacker impersonates an employee and/or a system of the company by sending emails from a known or trusted sender in order to induce targeted individuals to wire money or reveal confidential information.
- The attackers frequently embed personalized information in their electronic messages, including names, emails, and signatures of individuals within a protected network, to obtain funds, credentials, wire transfers, and other sensitive information.
- Countless organizations and individuals have fallen prey, sending wire transfers and sensitive customer and employee information to attackers impersonating, e.g., their CEO, boss, or trusted colleagues.
- Impersonation attacks do not always have to impersonate individuals; they can also impersonate a system or component that can send or receive electronic messages.
- For a non-limiting example, a networked printer on a company's internal network has been used in the so-called printer repo scam to initiate impersonation attacks against individuals of the company.
- ATO email account takeover
- An ATO can be detected by monitoring an organization's internal email traffic and classifying malicious emails sent from one employee to another within the same organization.
- An ATO can also be detected using statistical behavior features extracted from graph topology, including but not limited to success out-degree proportion, reverse page-rank, recipient clustering coefficient, and legitimate recipient proportion.
- Another method is login-based detection: monitoring the login attempts of employees and building models to find suspicious activity such as irregular login locations, times, devices, or browsers.
- FIG. 1 depicts an example of a system diagram to support email account takeover detection and remediation in accordance with some embodiments.
- FIG. 2 depicts a flowchart of an example of a process to support email account takeover detection and remediation in accordance with some embodiments.
- FIG. 3 depicts an example of a system diagram to support user login-based ATO detection via privacy-preserving machine learning in accordance with some embodiments.
- FIG. 4 depicts a flowchart of an example of a process to support user login-based ATO attack detection and monitoring via privacy-preserving machine learning in accordance with some embodiments.
- AI artificial intelligence
- The AI engine is configured to continuously monitor behaviors and identify communication patterns of an individual user on an electronic messaging system/communication platform of an entity/organization via application programming interface (API) call(s) to the electronic messaging system. Based on the identified communication patterns, the AI engine is configured to collect and utilize a variety of features and/or signals from an email sent from an internal email account of the entity, including but not limited to: identities/identifications of the sender and recipients of the email, forwarding rules and IP logins to the email account, and information about links embedded in the email as a function of how likely the links are to appear in the entity.
- API application programming interface
- The AI engine combines these signals to automatically detect whether the email account has been compromised by an external attacker and alert the individual user of the account and/or a system administrator accordingly in real time.
- The AI engine enables the parties to remediate the effects of the compromised email account by performing one or more of: searching for all malicious emails sent from the compromised email account, deleting or quarantining such emails from mailboxes of their recipients, notifying the recipients of the emails, and remediating any mailbox rules that the attacker may have set up on the compromised email account.
- The proposed approach is capable of collecting and examining internal as well as external electronic messages exchanged with parties outside of the entity to identify communication patterns of the email account of the user within the entity.
- The proposed approach is further capable of detecting anomalous activities and email account takeover attempts in real time, not offline or in hindsight, and allowing the user and/or administrator of the email account to promptly remediate the adverse effects of the compromised account.
- The term "user" refers not only to a person or human being, but also to a system or component that is configured to send and receive electronic messages and is thus also subject to an email account takeover attack.
- Such a system or component can be, but is not limited to, a web-based application used by individuals of the entity.
- FIG. 1 depicts an example of a system diagram 100 to support email account takeover detection and remediation.
- Although the diagrams depict components as functionally separate, such depiction is merely for illustrative purposes. It will be apparent that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware, and/or hardware components. Furthermore, it will also be apparent that such components, regardless of how they are combined or divided, can execute on the same host or on multiple hosts, wherein the multiple hosts can be connected by one or more networks.
- The system 100 includes at least an AI engine/classifier 104 having a message collection and analysis component 106 and a fraud detection component 108, and a plurality of databases including but not limited to a natural language processing (NLP) database 110, a reputable domain database 112, and a domain popularity database 114, each running on one or more computing units/appliances/hosts/servers 102 with software instructions stored in a storage unit, such as a non-volatile memory (also referred to as secondary memory) of the computing unit, for practicing one or more processes.
- NLP natural language processing
- When the software instructions are executed, at least a subset of the software instructions is loaded into memory (also referred to as primary memory) by one of the computing units of the host 102, which becomes a special-purpose computing unit for practicing the processes.
- The processes may also be at least partially embodied in the host 102 into which computer program code is loaded and/or executed, such that the host becomes a special-purpose computing unit for practicing the processes.
- The computer program code segments configure the computing unit to create specific logic circuits.
- Each host 102 can be a computing device, a communication device, a storage device, or any computing device capable of running a software component.
- For non-limiting examples, a computing device can be, but is not limited to, a laptop PC, a desktop PC, a tablet PC, or an x86- or ARM-based server running Linux or other operating systems.
- The electronic messaging system 116 can be, but is not limited to, Office365/Outlook, Slack, LinkedIn, Facebook, Gmail, Skype, Google Hangouts, Salesforce, Zendesk, Twilio, or any communication platform capable of providing electronic messaging services (e.g., sending, receiving, and/or archiving electronic messages) to users within the entity 118.
- The electronic messaging system 116 can be hosted either on email servers (not shown) associated with the entity 118 or on services/servers provided by a third party.
- The servers are located either locally with the entity 118 or in a cloud over the Internet.
- The electronic messages being exchanged on the electronic messaging system 116 include, but are not limited to, emails, instant messages, short messages, text messages, phone call transcripts, and social media posts.
- The host 102 has a communication interface (not shown), which enables the AI engine 104 and/or the databases 110, 112, and 114 running on the host 102 to communicate with the electronic messaging system 116 and client devices (not shown) associated with users within an entity/organization/company 118 following certain communication protocols, such as the TCP/IP, http, https, ftp, and sftp protocols, over one or more communication networks (not shown).
- The communication networks can be, but are not limited to, the Internet, an intranet, a wide area network (WAN), a local area network (LAN), a wireless network, Bluetooth, WiFi, and a mobile communication network.
- WAN wide area network
- LAN local area network
- The client devices are utilized by the users within the entity 118 to interact with (e.g., send electronic messages to and receive electronic messages from) the electronic messaging system 116, wherein the client devices reside either locally or remotely (e.g., in a cloud) from the host 102.
- The client devices can be, but are not limited to: mobile/hand-held devices such as tablets, iPhones, iPads, Google Android devices, and/or other types of mobile communication devices; PCs, such as laptop PCs and desktop PCs; and server machines.
- The AI engine 104 runs continuously on the host 102. As soon as one or more new/incoming messages or emails have been sent internally by one user within the entity 118 from an email account on the electronic messaging system 116 to another user within the entity 118, the message collection and analysis component 106 of the AI engine 104 is configured to collect such new electronic messages, as well as any new login attempt and/or any new mailbox rule change to the email account, in real time. In some embodiments, the message collection and analysis component 106 is configured to collect the electronic messages before the intended recipients of the electronic messages in the entity 118 receive them.
- The AI engine 104 is optionally authorized by the entity/organization 118, via an online authentication protocol (e.g., OAuth), to access the electronic messaging system 116 used by the users of the entity 118 to exchange electronic messages.
- The message collection and analysis component 106 is configured to retrieve the electronic messages automatically via programmable calls to one or more Application Programming Interfaces (APIs) of the electronic communication system 116.
- APIs Application Programming Interfaces
- Such automatic retrieval of electronic messages eliminates the need for manual input of data as required when, for a non-limiting example, scanning outgoing emails in relation to data leak prevention (“DLP”) configured to scan and identify leakage or loss of data.
- DLP data leak prevention
- The message collection and analysis component 106 is configured to retrieve not only external electronic messages exchanged between the users of the entity 118 and individual users outside of the entity 118, but also internal electronic messages exchanged between users within the entity 118, which expands the scope of communication fraud detection to cover the scenario where the security of one user within the entity 118 has been compromised during, for a non-limiting example, an email account takeover attack.
- The message collection and analysis component 106 is configured to identify communication patterns of each user based on collected electronic messages sent or received by the user on the electronic messaging system 116 over a certain period of time, e.g., a day, month, or year, or since the beginning of use.
- The electronic messages collected over a shorter or more recent time period may be used to identify recent communication patterns of the user, while the electronic messages collected over a longer period of time can be used to identify more reliable, longer-term communication patterns.
- The message collection and analysis component 106 is configured to collect the electronic messages from an electronic messaging server (e.g., an on-premises Exchange server) by using an email agent installed on the electronic messaging server, or by adopting a journaling rule (e.g., Bcc'ing all emails) to retrieve the electronic messages from the electronic messaging server (or to block the electronic messages at a gateway).
- The message collection and analysis component 106 is configured to use the unique communication patterns identified to examine and extract various features or signals from the collected electronic messages for email account takeover detection.
- The electronic messages are examined for one or more of: names or identifications of the sender and recipient(s), email addresses and/or domains of the sender and the recipient(s), timestamps and metadata of the electronic messages, forwarding rules and IP logins to the email account, and information about links embedded in the emails as a function of how likely the links are to appear in the entity 118.
- The message collection and analysis component 106 is further configured to examine the content of the electronic messages to extract sensitive information (e.g., legal or financial information, or the position of the user within the entity 118).
- The fraud detection component 108 is configured to first clean up the content of the email sent from the email account by removing any headers, signatures, salutations, disclaimers, etc. from the email. The fraud detection component 108 is then configured to utilize one or more of the following features and/or criteria that are unique to the email account to make a determination of whether the email account has been compromised (e.g., taken over by an attacker) or not:
- The NLP database 110 is configured to maintain a score for each word, wherein the score represents the likelihood of the word being associated with malicious (phishing) emails.
- The fraud detection component 108 is configured to compute the term frequency-inverse document frequency (TF-IDF) of each word offline, based on a corpus of labeled malicious emails and a corpus of innocent emails, to determine the likelihood of the word being malicious.
- TF-IDF term frequency-inverse document frequency
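The word-scoring scheme described above can be sketched as follows. This is an illustrative TF-IDF-style weighting only; the corpora, tokenization, and exact formula are assumptions for illustration, since the patent does not disclose them:

```python
import math
from collections import Counter

def word_scores(malicious_docs, benign_docs):
    """Score each word by how strongly it is associated with the
    malicious corpus: term frequency in malicious mail, weighted by
    inverse document frequency across both corpora."""
    all_docs = malicious_docs + benign_docs
    n_docs = len(all_docs)
    # Document frequency of each word across both corpora.
    df = Counter()
    for doc in all_docs:
        df.update(set(doc.split()))
    # Term frequency within the malicious corpus only, so that only
    # words seen in malicious mail receive a score.
    tf = Counter()
    for doc in malicious_docs:
        tf.update(doc.split())
    return {w: tf[w] * math.log(n_docs / df[w]) for w in tf}

# Tiny illustrative corpora (hypothetical, not from the patent).
scores = word_scores(
    malicious_docs=["urgent wire transfer required",
                    "verify your password urgent"],
    benign_docs=["meeting notes attached",
                 "lunch on friday",
                 "quarterly report draft"],
)
```

In this sketch a word like "urgent", which recurs across malicious emails, receives a higher score than a one-off word, and words seen only in benign mail receive no score at all.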
- The reputable domain database 112 is configured to store the likelihood of domains being legitimate for the entity 118.
- The reputable domain database 112 includes domains that have been seen by the message collection and analysis component 106 in internal communications more than a certain number of times over a certain period of time (e.g., the last few days). If a certain domain has been seen often in internal communications during a short period of time, it is deemed to be legitimate, as it is unlikely to be associated with a phishing link even if the domain is relatively unpopular.
- The domain popularity database 114 is configured to maintain statistics on the popularity of domains of the electronic messages across the internet. The less popular a domain in the electronic messages is, the more likely the domain is to be a phishing link.
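The interplay of the two databases can be sketched as a simple lookup: a domain seen often in internal mail is treated as legitimate regardless of global popularity; otherwise global popularity decides. The function name, thresholds, and risk values here are illustrative assumptions, not the patent's actual criteria:

```python
def link_risk(domain, internal_seen_counts, domain_popularity,
              internal_threshold=10, popularity_floor=1e-4):
    """Rough risk heuristic for a link domain (a sketch).

    internal_seen_counts: times each domain appeared in recent
        internal communications (the reputable domain database role).
    domain_popularity: internet-wide popularity stats
        (the domain popularity database role).
    """
    # Domains common in internal mail are deemed legitimate, even if
    # relatively unpopular on the wider internet.
    if internal_seen_counts.get(domain, 0) >= internal_threshold:
        return 0.0
    # Otherwise, the less popular the domain, the riskier the link.
    if domain_popularity.get(domain, 0.0) >= popularity_floor:
        return 0.2
    return 0.9  # rare, never-seen domain: likely phishing candidate

internal_risk = link_risk("portal.corp.example",
                          {"portal.corp.example": 50}, {})
rare_risk = link_risk("evil-login.example", {}, {})
popular_risk = link_risk("bigsite.example", {},
                         {"bigsite.example": 0.02})
```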
- The fraud detection component 108 is configured to detect anomalous signals/features in the attributes, metadata, and/or content of the retrieved electronic messages for email account takeover detection.
- The anomalous signals include, but are not limited to: the same sender using another email address for the first time, replying to someone else in the email/electronic message chain, or a sudden change in the number of recipients of an electronic message.
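The three anomalous signals just listed can be sketched as checks against a per-sender profile built from previously collected messages. The field names, profile structure, and spike threshold are assumptions for illustration:

```python
def anomaly_signals(msg, history):
    """Return the anomaly signals triggered by one message (a sketch).

    history maps a sender's display name to a profile with the
    addresses and typical recipient count seen for that sender.
    """
    profile = history.get(msg["sender_name"], {})
    signals = []
    # Same sender (display name) using another email address for the
    # first time.
    if msg["sender_address"] not in profile.get("known_addresses", set()):
        signals.append("new_address_for_sender")
    # Sudden change in the number of recipients (3x is an assumed
    # threshold).
    typical = profile.get("typical_recipient_count", 1)
    if len(msg["recipients"]) > 3 * typical:
        signals.append("recipient_count_spike")
    # Replying to someone other than a participant in the chain.
    if msg.get("replies_to_thread_participant") is False:
        signals.append("reply_outside_thread")
    return signals

# Hypothetical profile and message.
history = {"Alice": {"known_addresses": {"alice@corp.example"},
                     "typical_recipient_count": 2}}
suspicious_msg = {"sender_name": "Alice",
                  "sender_address": "alice@freemail.example",
                  "recipients": ["u%d@corp.example" % i for i in range(10)],
                  "replies_to_thread_participant": False}
signals = anomaly_signals(suspicious_msg, history)
```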
- The fraud detection component 108 of the AI engine 104 is configured to detect fraudulent incoming messages that are part of a longer conversation that includes more than one electronic message, e.g., a chain of emails. Rather than simply examining the first message of the conversation, the fraud detection component 108 is configured to monitor all electronic messages in the conversation continuously in real time, and will flag an electronic message in the conversation for blocking or quarantine at any point once a predetermined set of anomalous signals is detected.
- The fraud detection component 108 is configured to determine with a high degree of accuracy whether the email account is compromised by an email account takeover attack or by other kinds of communication fraud and/or former/ongoing network threats, which include but are not limited to a personalized phishing attempt that entices the recipient to click on a link which may ask them to enter their credentials or download a virus, or an attacker hijacking an internal account and using it to communicate with other users in the organization or with external parties.
- If the fraud detection component 108 determines that the email account has been compromised, it is configured to block (remove, delete, modify) or quarantine electronic messages sent from the compromised email account in real time, and to automatically notify the user, the intended recipient(s) of the electronic message, and/or an administrator of the electronic communication system 116 of the email account takeover attack.
- The fraud detection component 108 enables the notified parties to remediate the email account takeover incident by allowing them to search for any malicious emails sent from the compromised email account, delete or quarantine such emails from mailboxes of their recipients, notify the recipients of those emails, and delete and/or reset any malicious mailbox rules, e.g., inbox forwarding rules, which the attacker may have set up on the compromised email account.
- FIG. 2 depicts a flowchart 200 of an example of a process to support email account takeover detection and remediation.
- Although FIG. 2 depicts functional steps in a particular order for purposes of illustration, the processes are not limited to any particular order or arrangement of steps.
- One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
- The flowchart 200 starts at block 202, where an internal electronic message, sent from an email account of a user in an entity to another user in the entity, is automatically collected in real time via an application programming interface (API) call to an electronic messaging system of the entity.
- The flowchart 200 continues to block 204, where the collected electronic message is analyzed to extract a plurality of features and/or signals from the electronic message to determine if it is malicious for email account takeover detection.
- The flowchart 200 continues to block 206, where it is determined with a high degree of accuracy whether the email account has been compromised by an email account takeover attack, based on the detected features and/or signals in the email.
- The flowchart 200 continues to block 208, where electronic messages sent from the email account are searched, blocked, and quarantined in real time if it is determined that the email account has been compromised by the email account takeover attack.
- The flowchart 200 ends at block 210, where the user, one or more intended recipients of the electronic messages, and/or an administrator of the electronic messaging system are notified of the email account takeover attack and are enabled to take one or more remediating actions in response to the email account takeover attack.
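The flow of blocks 202 through 210 can be sketched as one orchestration function. The callables stand in for the AI engine's components; their names and signatures are assumptions for illustration, not the patent's actual interfaces:

```python
def handle_internal_message(message, extract_signals, classify,
                            quarantine, notify):
    """Sketch of the flowchart 200 pipeline for one collected
    internal message (block 202 is the collection that produced
    `message`)."""
    signals = extract_signals(message)                    # block 204
    compromised = classify(message["account"], signals)   # block 206
    if compromised:
        quarantine(message["account"])                    # block 208
        notify(message["account"], signals)               # block 210
    return compromised

# Hypothetical components wired up with stubs to show the flow.
actions = []
result = handle_internal_message(
    {"account": "user@corp.example"},
    extract_signals=lambda msg: ["forwarding_rule_added"],
    classify=lambda account, signals: bool(signals),
    quarantine=lambda account: actions.append(("quarantine", account)),
    notify=lambda account, signals: actions.append(("notify", account)),
)
```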
- A new privacy-preserving machine learning (ML) mechanism for user login-based account takeover (ATO) detection is proposed, which is configured to detect ATO attacks based on login attempts by users.
- The proposed approach relies on assessing the fraudulence confidence level of login IP addresses to classify the login attempts by the users.
- A plurality of attributes/features in one or more user login data logs are extracted and used to build a labeled dataset for training a machine learning (ML) model that relies on statistics of the login attempts to classify and detect fraudulent logins.
- The plurality of features include, but are not limited to, one or more of the IP address and/or the user of a login, and a field that provides information about the device and/or browser used in the login.
- These attributes make it possible to ascertain if a login attempt or instance by a user is suspicious based on the ML model. For a non-limiting example, if the IP address is used in a considerable number of failed login attempts and very few successful login attempts, the IP address tends to be a suspicious IP. Additionally, if the IP address shows up across multiple entities/companies with the same behavior, it is a confirmation that the IP address is illegitimate. Furthermore, if information of the browser used for the login is suspicious, it is another confirmation that the login attempt is an ATO attack activity rather than a random activity of a legitimate user.
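The criteria above can be sketched as a small scoring heuristic: many failed logins with few successes, the same behavior across multiple entities, and a suspicious browser each add to an IP's suspicion score. The thresholds and weights are illustrative assumptions, not values from the patent:

```python
def ip_suspicion(stats):
    """Score an IP address's suspicion level from its login stats
    (a sketch; field names and weights are assumptions)."""
    failed = stats["failed_logins"]
    ok = stats["successful_logins"]
    fail_ratio = failed / max(failed + ok, 1)
    score = 0.0
    # Considerable failed logins and very few successes from one IP.
    if failed >= 20 and fail_ratio > 0.9:
        score += 0.5
    # The same IP failing across multiple entities/companies is a
    # confirmation that it is illegitimate.
    if stats.get("entities_with_failures", 0) > 1:
        score += 0.3
    # Suspicious browser information is a further confirmation of ATO
    # activity rather than a legitimate user's random activity.
    if stats.get("suspicious_browser", False):
        score += 0.2
    return score

attacker_score = ip_suspicion({"failed_logins": 100,
                               "successful_logins": 2,
                               "entities_with_failures": 5,
                               "suspicious_browser": True})
benign_score = ip_suspicion({"failed_logins": 1,
                             "successful_logins": 50})
```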
- The ML model is trained using anonymized user login data to preserve the privacy of the users.
- Any drop in the ML model's accuracy in detecting ATO attacks as a consequence of the data anonymization is measured.
- A proper level of data anonymization can be determined based on the ML model's accuracy in detecting the ATO attacks when trained with different versions of the anonymized data.
- The proposed approach is thus able to achieve a balance between accuracy of ATO attack detection and preservation of user data privacy.
- The proposed approach is also applicable to training the ML model using incomplete (instead of anonymized) user login data for ATO attack detection.
- FIG. 3 depicts an example of a system diagram 300 to support user login-based ATO detection via privacy-preserving machine learning.
- The AI engine 104 further includes a data anonymization component 107 configured to anonymize data used to train various AI models, as discussed in detail below.
- The message collection and analysis component 106 of the AI engine 104 is configured to collect data from the electronic messaging system 116, including all login attempts to accounts of users in an entity on the electronic messaging system 116.
- For a non-limiting example, the data can be collected from Microsoft Office 365 Azure Active Directory, wherein the data includes all login attempts to Office 365 accounts.
- The collected data comprises a plurality of (e.g., up to 20) fields/attributes that provide information about one or more of the entity/company, user name, date, device, browser, authentication, and IP address of each login attempt.
- The personally identifying information (PII) portion of the collected data, including names and emails of the users, is suppressed by the data anonymization component 107, and only a subset (e.g., 6) of the attributes is utilized by the message collection and analysis component 106 to form a set of possible quasi-identifiers.
- The quasi-identifiers can be, but are not limited to: the IP address that each user or employee logged in from, the entity/organization that the user belongs to, the country where each login attempt originated, the time of the login attempt, the operation result that indicates whether the login attempt was successful or not, and other extended attributes (e.g., Extended Properties) that provide information about the browser, device, and authentication used by the login attempt.
- One or more additional attributes may be designated as sensitive attributes.
- the message collection and analysis component 106 of the AI engine 104 is configured to create an IP reputation model, wherein the IP reputation model relies on a set of features extracted from the collected login attempts to determine reputations of the IP addresses of the login attempts. The reputations of the IP addresses are then utilized by the fraud detection component 108 to classify the users' login attempts.
- the set of features are based mainly on statistical data or stats of the features (e.g., IP addresses) extracted by the message collection and analysis component 106 from login attempts to a large number of user accounts across a wide range of entities.
- the message collection and analysis component 106 is configured to process the collected data of the login attempts from the electronic messaging system 116 to yield an augmented dataset that provides statistical data of the features (e.g., IP addresses) of the login attempts for training of the ML model discussed below.
- the message collection and analysis component 106 is configured to generate the statistical data of the login attempts by grouping the collected data by their IP addresses and aggregating the statistical features of the IP addresses. Each datapoint in the resulting dataset is a distinct IP address and the corresponding columns are the statistical data connected with the IP address.
- the IP dataset is used to create the IP reputation model discussed above.
- the statistical data/stats of the login attempts from each IP address includes but are not limited to one or more of total number of failed login attempts from the IP address (e.g., IPBadLogins), total number of successful login attempts from the IP address (e.g., IPGoodLogins), total number of distinct users with failed login attempts from the IP address (e.g., IPBadUsers), total number of distinct users with successful login attempts from the IP address (e.g., IPGoodUsers), total number of distinct organizations with failed login attempts from the IP address (e.g., IPBadOrgs), and total number of distinct organizations with successful login attempts from the IP address (e.g., IPGoodOrgs).
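As a sketch, the grouping-and-aggregation step described above can be written in Python; the record field names (`ip`, `user`, `org`, `success`) are illustrative assumptions, while the output feature names mirror the examples in the text:

```python
from collections import defaultdict

def aggregate_ip_stats(logins):
    """Group raw login records by IP address and compute per-IP statistics:
    counts of failed/successful logins and of the distinct users and
    organizations behind them. Each datapoint in the result is one IP."""
    stats = defaultdict(lambda: {
        "bad": 0, "good": 0,
        "bad_users": set(), "good_users": set(),
        "bad_orgs": set(), "good_orgs": set(),
    })
    for rec in logins:
        s = stats[rec["ip"]]
        if rec["success"]:
            s["good"] += 1
            s["good_users"].add(rec["user"])
            s["good_orgs"].add(rec["org"])
        else:
            s["bad"] += 1
            s["bad_users"].add(rec["user"])
            s["bad_orgs"].add(rec["org"])
    return {
        ip: {
            "IPBadLogins": s["bad"],
            "IPGoodLogins": s["good"],
            "IPBadUsers": len(s["bad_users"]),
            "IPGoodUsers": len(s["good_users"]),
            "IPBadOrgs": len(s["bad_orgs"]),
            "IPGoodOrgs": len(s["good_orgs"]),
        }
        for ip, s in stats.items()
    }
```
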
- the message collection and analysis component 106 is configured to parse the set of features/attributes to extract an identification or ID of the user agent and to generate a binary flag, e.g., UserAgentFlag, based on statistics of whether the user agent is often correlated with ATO attacks or not.
- the binary flag is set to value 1 if the user agent is suspicious and 0 otherwise.
- the IP reputation model adopts two classes of IP reputation—bad and good.
- the message collection and analysis component 106 is configured to collect a set of random samples of IP addresses that are known to be connected with ATO attacks that, e.g., exploit the password spraying behavior, in order to learn how to decide whether an IP address should be deemed as bad or good. Based on these samples and the collected statistical data for these IP addresses discussed above, the message collection and analysis component 106 is configured to train a K-Nearest Neighbor (KNN) model to capture data points of IP addresses with bad reputation for a labeled dataset.
- the message collection and analysis component 106 is configured to train another KNN model on a set of random samples of known legitimate logins from IP addresses that are known to be safe for the labeled dataset. The message collection and analysis component 106 is then configured to set criteria represented as a set of rules and split the collected IP addresses into the two classes for the labeled dataset(s), bad or good reputation IP addresses, based on the criteria. In some embodiments, the message collection and analysis component 106 is configured to form a dataset/classifier with the corresponding reputation by collecting random samples of user logins that took place from each dataset of bad or good reputation IP addresses. A user login is labeled as fraudulent if it took place from a bad reputation IP address. Otherwise, the user login is labeled as a legitimate login.
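A minimal sketch of the nearest-neighbor classification and login-labeling steps described above, assuming per-IP feature vectors have already been computed; the Euclidean distance metric and the value of `k` are illustrative choices, not from the source:

```python
import math

def knn_predict(labeled, query, k=3):
    """Minimal k-nearest-neighbor vote. `labeled` is a list of
    (feature_vector, label) pairs, e.g. per-IP statistical features
    labeled 'bad' (known ATO/password-spraying IPs) or 'good' (known
    safe IPs). Returns the majority label among the k neighbors
    nearest to `query` by Euclidean distance."""
    nearest = sorted(labeled, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

def label_login(login_ip, ip_reputation):
    """A user login is labeled fraudulent iff it took place from a
    bad-reputation IP address; otherwise it is labeled legitimate."""
    return "fraudulent" if ip_reputation.get(login_ip) == "bad" else "legitimate"
```
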
- the data anonymization component 107 of the AI engine 104 is configured to anonymize the labeled dataset of the login attempts used to train the ML model to preserve privacy of the users.
- the data anonymization component 107 is configured to apply different variants of data anonymization to the labeled dataset of the login attempts.
- the data anonymization component 107 is configured to apply IP-subnet masking to hide a portion of the full IP address of each of the login attempts in the labeled dataset.
- the data anonymization component 107 is configured to omit the IP address from the labeled dataset and replace it with a country attribute.
- the data anonymization component 107 is configured to suppress the personally identifying information (PII) portion of the collected data, including but not limited to names and emails of the users, from the labeled dataset used to train the ML model.
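The three anonymization variants above can be sketched as follows; the record layout and field names are assumptions for illustration:

```python
def anonymize_subnet(record, keep_octets=2):
    """IP-subnet variant: hide the host portion of the IP address,
    keeping only the leading octets (e.g. '203.0.113.7' -> '203.0.x.x')."""
    octets = record["ip"].split(".")
    masked = octets[:keep_octets] + ["x"] * (4 - keep_octets)
    out = dict(record)
    out["ip"] = ".".join(masked)
    return out

def anonymize_country(record):
    """Country variant: omit the IP address entirely, leaving the
    country attribute to stand in for the login's origin."""
    return {k: v for k, v in record.items() if k != "ip"}

def suppress_pii(record, pii_fields=("name", "email")):
    """Suppress personally identifying fields such as names and emails."""
    return {k: v for k, v in record.items() if k not in pii_fields}
```
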
- the data anonymization component 107 is configured to compute accuracy, precision, and/or recall for the different variants of data anonymization to determine the drop in the ML model's accuracy in detecting fraudulent user login attempts as a result of the dataset anonymization. Any drop in the ML model's accuracy in terms of detecting the ATO attacks as a consequence of the data anonymization can then be measured so that the cost or tradeoff between data anonymization and loss in the ML model's accuracy can be quantified and understood. Based on such tradeoff, the data anonymization component 107 is configured to determine a proper or optimal level of data anonymization based on the ML model's accuracy when trained with different variants of the anonymized data. With the ML model trained with the appropriate level of data anonymization, the AI engine 104 is able to achieve a balance between accuracy of ATO attack detection and preservation of user data privacy.
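A sketch of the metric computation used to quantify the privacy/accuracy tradeoff; it would be run once per anonymization variant and compared against an un-anonymized baseline to measure the accuracy drop:

```python
def precision_recall_accuracy(y_true, y_pred, positive="fraudulent"):
    """Compute accuracy, precision, and recall for one anonymization
    variant from true and predicted login labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```
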
- the fraud detection component 108 of the AI engine 104 is configured to train the ML model for ATO detection using the anonymized dataset/classifier.
- the fraud detection component 108 is configured to train the ML model using incomplete (instead of anonymized) dataset of user login attempts for ATO attack detection.
- training of the ML model may require the user to set various hyper-parameters, e.g., weights, that govern/tune the ML model's training process.
- the labeled dataset is split into training and testing subsets through cross validation to make sure the designed ML model operates well in real life.
- the fraud detection component 108 is configured to determine an optimal set of parameters or features of the labeled dataset for training of the ML model by conducting cross-validation grid search, which takes different sets of data as training and testing over the different combinations of the hyper-parameters.
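One way the cross-validation grid search could look, with the model itself left abstract behind a caller-supplied train/evaluate function; the splitting scheme and function signature are illustrative assumptions:

```python
from itertools import product

def grid_search_cv(data, labels, param_grid, train_eval, k=5):
    """Cross-validated grid search: for every combination of
    hyper-parameters, average a user-supplied train/evaluate function
    over k train/test splits and keep the best combination.
    `train_eval(train_X, train_y, test_X, test_y, params)` must return
    an accuracy-like score for a model trained under `params`."""
    folds = [list(range(i, len(data), k)) for i in range(k)]
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        scores = []
        for test_idx in folds:
            test_set = set(test_idx)
            tr = [i for i in range(len(data)) if i not in test_set]
            scores.append(train_eval(
                [data[i] for i in tr], [labels[i] for i in tr],
                [data[i] for i in test_idx], [labels[i] for i in test_idx],
                params))
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_params, best_score = params, avg
    return best_params, best_score
```
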
- the ML model for ATO detection can be used for detection of fraudulent new user login attempts.
- the fraud detection component 108 is configured to join/combine the IP address of the new login attempt with the features and/or attributes of the set of features previously extracted and analyzed.
- the fraud detection component 108 is then configured to feed the combined IP address of the new login attempt and the extracted features into the trained ML model to classify the new login attempt and output the determination of the new login attempt to a system administrator.
- the fraud detection component 108 is configured to detect an attack that specifically exploits password spraying, where the attack is able to obtain the credentials of a user to an email account and log in to the user's account to conduct malicious activities (note that the fraud detection component 108 is configured to also detect other more generic types and instances of cyber attacks).
- the password-spraying attack combines a large number of usernames with a single password and avoids account lock-out because it looks like a series of isolated failed logins.
- the password spraying attack exploits a plurality of user login/credential logs/dumps to find common variations of usernames and passwords in the user credential logs.
- the fraud detection component 108 is configured to maintain one or more ML models that detect suspicious logins and alert system administrator accordingly of a password spraying attack.
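A rule-of-thumb detector for the password-spraying signature described above (many distinct usernames with only a few failures each, all from one source IP); the thresholds are illustrative assumptions, not from the source:

```python
from collections import defaultdict

def detect_password_spraying(failed_logins, min_users=20, max_fails_per_user=2):
    """Flag source IPs whose failed logins span many distinct usernames
    with only a few failures each -- the signature of password spraying,
    which pairs one password with many usernames so each account sees
    only isolated failed logins and lock-out is avoided."""
    per_ip = defaultdict(lambda: defaultdict(int))
    for rec in failed_logins:
        per_ip[rec["ip"]][rec["user"]] += 1
    suspicious = []
    for ip, users in per_ip.items():
        if len(users) >= min_users and max(users.values()) <= max_fails_per_user:
            suspicious.append(ip)
    return suspicious
```

Note the contrast with brute forcing: one username with many failures stays below the distinct-user threshold and is not flagged by this rule.
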
- FIG. 4 depicts a flowchart 400 of an example of a process to support user login-based ATO attack detection and monitoring via privacy-preserving machine learning.
- the flowchart 400 starts at step 402 , where data of a plurality of login attempts to a plurality of user accounts on an electronic messaging system are collected.
- the flowchart 400 continues to block 404 , where an IP reputation model is created by extracting a set of features from the collected data of the plurality of login attempts to determine reputations of IP addresses of the login attempts to classify the plurality of login attempts.
- the flowchart 400 continues to block 406 , where a labeled dataset is generated based on the reputations of IP addresses of a set of random samples of the plurality of login attempts with known good or bad reputation of their IP addresses.
- the flowchart 400 continues to block 408 , where the labeled dataset of the login attempts is anonymized to preserve privacy of users of the login attempts.
- the flowchart 400 continues to block 410 , where the anonymized labeled dataset of the login attempts is used to train a machine learning (ML) model for detection of account takeover (ATO) attacks.
- One embodiment may be implemented using a conventional general purpose or a specialized digital computer or microprocessor(s) programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art.
- Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
- the invention may also be implemented by the preparation of integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
- the methods and system described herein may be at least partially embodied in the form of computer-implemented processes and apparatus for practicing those processes.
- the disclosed methods may also be at least partially embodied in the form of tangible, non-transitory machine readable storage media encoded with computer program code.
- the media may include, for example, RAMs, ROMs, CD-ROMs, DVD-ROMs, BD-ROMs, hard disk drives, flash memories, or any other non-transitory machine-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the method.
- the methods may also be at least partially embodied in the form of a computer into which computer program code is loaded and/or executed, such that, the computer becomes a special purpose computer for practicing the methods.
- the computer program code segments configure the processor to create specific logic circuits.
- the methods may alternatively be at least partially embodied in a digital signal processor formed of application specific integrated circuits for performing the methods.
Description
- This application claims the benefit of U.S. Provisional Patent Application No. 63/037,988, filed Jun. 11, 2020, which is incorporated herein in its entirety by reference.
- This application is a continuation-in-part of U.S. patent application Ser. No. 16/947,074, filed Jul. 16, 2020, which is a continuation of U.S. patent application Ser. No. 16/363,596, filed Mar. 25, 2019, now U.S. Pat. No. 10,778,717, which claims the benefit of U.S. Provisional Patent Application No. 62/778,250, filed Dec. 11, 2018, and is a continuation-in-part of U.S. patent application Ser. No. 15/693,318, filed Aug. 31, 2017, all of which are incorporated herein in their entireties by reference.
- Cyber criminals are increasingly utilizing social engineering and deception to successfully conduct wire fraud and extract sensitive information from their targets. Spear phishing, also known as Business Email Compromise, a cyber fraud where the attacker impersonates an employee and/or a system of the company by sending emails from a known or trusted sender in order to induce targeted individuals to wire money or reveal confidential information, is rapidly becoming the most devastating new cybersecurity threat. The attackers frequently embed personalized information in their electronic messages including names, emails, and signatures of individuals within a protected network to obtain funds, credentials, wire transfers and other sensitive information. Countless organizations and individuals have fallen prey, sending wire transfers and sensitive customer and employee information to attackers impersonating, e.g., their CEO, boss, or trusted colleagues. Note that such impersonation attacks do not always have to impersonate individuals; they can also impersonate a system or component that can send or receive electronic messages. For a non-limiting example, a networked printer on a company's internal network has been used by the so-called printer repo scam to initiate impersonation attacks against individuals of the company.
- One specific type of attack, email account takeover (ATO), where an attacker steals credentials of an email account and uses the email account to attack accounts of other internal and/or external users, has been on the rise. According to a recent report issued by the FBI, over $12 billion worth of assets have been lost due to business email account takeover and compromise incidents. ATO is a type of identity fraud, which hit an all-time high with 16.7 million U.S. victims in 2017. In an ATO, an attacker gets illegitimate access to a user's account through a malicious login. Detecting the compromised user accounts and stopping the attackers before data is ex-filtrated or destroyed, or the account is used in nefarious actions, is the goal of ATO prevention and remediation. In some cases, an ATO can be detected by monitoring organizations' internal email traffic and classifying malicious emails sent from one employee to another within the same organization. An ATO can also be detected using statistical behavior features extracted from graph topology, including but not limited to success out-degree proportion, reverse page-rank, recipient clustering coefficient, and legitimate recipient proportion. Another login-based detection method is to monitor the login attempts of employees and build models to find suspicious activity such as irregular login locations, times, devices, or browsers.
- Existing email security solutions, however, are ineffective at detecting these attacks because the emails launched from the compromised accounts come from a legitimate sender, and therefore headers of the emails contain no malicious signals. Specifically, while the approaches mentioned above are able to detect ATO in some scenarios, there exist multiple other scenarios where the attacker activity emulates a legitimate user's behavior, which renders the attack undetected. One non-limiting example of such attack scenarios is the password spraying attack, whose detection relies on collecting user login data on a large scale. While it is appealing to publish or share such data, disclosing it comes with privacy concerns. As such, data owners have to anonymize attributes of user login data to avoid any potential privacy loss before disclosing the data. The effect of data anonymization on the accuracy of machine learning models trained on the disclosed data, however, has never been addressed. Even worse, traditional email security solutions are typically located at the gateway or firewall to the internal network, e.g., they reside between the external network and the organization's email server, and thus cannot monitor or stop internal emails. An efficient approach to deal with email account takeover attacks is needed.
- The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent upon a reading of the specification and a study of the drawings.
- Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. It is noted that, in accordance with the standard practice in the industry, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
-
FIG. 1 depicts an example of a system diagram to support email account takeover detection and remediation in accordance with some embodiments. -
FIG. 2 depicts a flowchart of an example of a process to support email account takeover detection and remediation in accordance with some embodiments. -
FIG. 3 depicts an example of a system diagram to support user login-based ATO detection via privacy-preserving machine learning in accordance with some embodiments. -
FIG. 4 depicts a flowchart of an example of a process to support user login-based ATO attack detection and monitoring via privacy-preserving machine learning in accordance with some embodiments. - The following disclosure provides many different embodiments, or examples, for implementing different features of the subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
- A new approach is proposed that contemplates systems and methods to support email account takeover detection and remediation by utilizing an artificial intelligence (AI) engine/classifier that detects and remediates such attacks in real time. The AI engine is configured to continuously monitor behaviors and identify communication patterns of an individual user on an electronic messaging system/communication platform of an entity/organization via application programming interface (API) call(s) to the electronic messaging system. Based on the identified communication patterns, the AI engine is configured to collect and utilize a variety of features and/or signals from an email sent from an internal email account of the entity, including but not limited to identities/identifications of the sender and recipients of the email, forwarding rules and IP logins to the email account, information about links embedded in the email as a function of how likely the links are to appear in the entity. The AI engine combines these signals to automatically detect whether the email account has been compromised by an external attacker and alert the individual user of the account and/or a system administrator accordingly in real time. In addition, the AI engine enables the parties to remediate the effects of the compromised email account by performing one or more of: searching for all malicious emails sent from the compromised email account, deleting or quarantining such emails from mailboxes of their recipients, notifying the recipients of the emails, and remediating any mailbox rules that the attacker may have setup on the compromised email account.
- Compared to traditional gateway-based security systems that only monitor and filter external communications, the proposed approach is capable of collecting and examining internal as well as external electronic messages exchanged with parties outside of the entity to identify communication patterns of the email account of the user within the entity. The proposed approach is further capable of detecting anomalous activities and email account takeover attempts in real-time, not offline or in hindsight, and allowing the user and/or administrator of the email account to promptly remediate the adverse effects of the compromised account.
- As used hereinafter, the term “user” (or “users”) refers not only to a person or human being, but also to a system or component that is configured to send and receive electronic messages and is thus also subject to an email account takeover attack. For a non-limiting example, such system or component can be but is not limited to a web-based application used by individuals of the entity.
-
FIG. 1 depicts an example of a system diagram 100 to support email account takeover detection and remediation. Although the diagrams depict components as functionally separate, such depiction is merely for illustrative purposes. It will be apparent that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware components. Furthermore, it will also be apparent that such components, regardless of how they are combined or divided, can execute on the same host or multiple hosts, and wherein the multiple hosts can be connected by one or more networks. - In the example of
FIG. 1, the system 100 includes at least an AI engine/classifier 104 having a message collection and analysis component 106 and a fraud detection component 108, and a plurality of databases including but not limited to a natural language processing (NLP) database 110, a reputable domain database 112, and a domain popularity database 114, each running on one or more computing unit/appliance/host/server 102 with software instructions stored in a storage unit such as a non-volatile memory (also referred to as secondary memory) of the computing unit for practicing one or more processes. When the software instructions are executed, at least a subset of the software instructions is loaded into memory (also referred to as primary memory) by one of the computing units of the host 102, which becomes a special purposed one for practicing the processes. The processes may also be at least partially embodied in the host 102 into which computer program code is loaded and/or executed, such that the host becomes a special purpose computing unit for practicing the processes. When implemented on a general-purpose computing unit, the computer program code segments configure the computing unit to create specific logic circuits. - In the example of
FIG. 1, each host 102 can be a computing device, a communication device, a storage device, or any computing device capable of running a software component. For non-limiting examples, a computing device can be but is not limited to a laptop PC, a desktop PC, a tablet PC, or an x86- or ARM-based server running Linux or other operating systems. - In the example of
FIG. 1, the electronic messaging system 116 can be but is not limited to Office365/Outlook, Slack, LinkedIn, Facebook, Gmail, Skype, Google Hangouts, Salesforce, Zendesk, Twilio, or any communication platform capable of providing electronic messaging services (e.g., to send, receive, and/or archive electronic messages) to users within the entity 118. Here, the electronic messaging system 116 can be hosted either on email servers (not shown) associated with the entity 118 or on services/servers provided by a third party. The servers are either located locally with the entity 118 or in a cloud over the Internet. The electronic messages being exchanged on the electronic messaging system 116 include but are not limited to emails, instant messages, short messages, text messages, phone call transcripts, and social media posts, etc. - In the example of
FIG. 1, the host 102 has a communication interface (not shown), which enables the AI engine 104 and/or the databases on the host 102 to communicate with the electronic messaging system 116 and client devices (not shown) associated with users within an entity/organization/company 118 following certain communication protocols, such as TCP/IP, http, https, ftp, and sftp protocols, over one or more communication networks (not shown). Here, the communication networks can be but are not limited to the internet, intranet, wide area network (WAN), local area network (LAN), wireless network, Bluetooth, WiFi, and mobile communication network. The physical connections of the network and the communication protocols are well known to those of skill in the art. The client devices are utilized by the users within the entity 118 to interact with (e.g., send or receive electronic messages to and from) the electronic messaging system 116, wherein the client devices reside either locally or remotely (e.g., in a cloud) from the host 102. In some embodiments, the client devices can be but are not limited to mobile/hand-held devices such as tablets, iPhones, iPads, Google's Android devices, and/or other types of mobile communication devices, PCs such as laptop PCs and desktop PCs, and server machines. - During the operation of the
system 100, the AI engine 104 runs continuously on the host 102. As soon as one or more new/incoming messages or emails have been sent internally by one user within the entity 118 from an email account on the electronic messaging system 116 to another user within the entity 118, the message collection and analysis component 106 of the AI engine 104 is configured to collect such new electronic messages as well as any new login attempt and/or any new mailbox rule change to the email account in real time. In some embodiments, the message collection and analysis component 106 is configured to collect the electronic messages before the intended recipients of the electronic messages in the entity 118 receive them. In some embodiments, the AI engine 104 is optionally authorized by the entity/organization 118 via online authentication protocol (OATH) to access the electronic messaging system 116 used by the users of the entity 118 to exchange electronic messages. In some embodiments, the message collection and analysis component 106 is configured to retrieve the electronic messages automatically via programmable calls to one or more Application Programming Interfaces (APIs) to the electronic communication system 116. Such automatic retrieval of electronic messages eliminates the need for manual input of data as required when, for a non-limiting example, scanning outgoing emails in relation to data leak prevention (“DLP”) configured to scan and identify leakage or loss of data.
Through the API calls, the message collection and analysis component 106 is configured to retrieve not only external electronic messages exchanged between the users of the entity 118 and individual users outside of the entity 118, but also internal electronic messages exchanged between users within the entity 118, which expands the scope of communication fraud detection to cover the scenario where security of one user within the entity 118 has been compromised during, for a non-limiting example, an email account takeover attack. - In some embodiments, the message collection and
analysis component 106 is configured to identify communication patterns of each user based on collected electronic messages sent or received by the user on the electronic messaging system 116 over a certain period of time, e.g., a day, month, year, or since beginning of use. The electronic messages collected over a shorter or more recent time period may be used to identify recent communication patterns of the user while the electronic messages collected over a longer period of time can be used to identify more reliable longer term communication patterns. In some embodiments, the message collection and analysis component 106 is configured to collect the electronic messages from an electronic messaging server (e.g., an on-premises Exchange server) by using an installed email agent on the electronic messaging server or adopting a journaling rule (e.g., Bcc all emails) to retrieve the electronic messages from the electronic messaging server (or to block the electronic messages at a gateway). - In some embodiments, the message collection and
analysis component 106 is configured to use the unique communication patterns identified to examine and extract various features or signals from the collected electronic messages for email account takeover detection. For non-limiting examples, the electronic messages are examined for one or more of names or identifications of sender and recipient(s), email addresses and/or domains of the sender and the recipient(s), timestamp, and metadata of the electronic messages, forwarding rules and IP logins to the email account, and information about links embedded in the emails as a function of how likely the links are to appear in the entity 118. In some embodiments, the message collection and analysis component 106 is further configured to examine content of the electronic messages to extract sensitive information (e.g., legal, financial, position of the user within the entity 118, etc.). - In some embodiments, the
fraud detection component 108 is configured to first clean up content of the email sent from the email account by removing any headers, signatures, salutations, disclaimers, etc. from the mail. The fraud detection component 108 is then configured to utilize one or more of the following features and/or criteria that are unique to the email account to make a determination of whether the email account has been compromised (e.g., taken over by an attacker) or not: -
- Number of embedded links in the email sent by the email account;
- Length of the longest URL in the email sent by the email account;
- How likely every single word in the email sent by the email account is to be associated with a malicious email according to the NLP database 110;
- Whether any of the domains in the email sent by the email account are likely to be malicious, using the scores from the reputable domain database 112 and/or the domain popularity database 114;
- IP logins to the email account;
- Mailbox rule changes to the email account.
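The first two link-based features in the list above can be sketched as follows; the URL regex is a crude illustration, and real link extraction would parse the message body properly:

```python
import re

# Crude URL pattern for illustration only.
URL_RE = re.compile(r"https?://[^\s<>]+")

def link_features(email_body):
    """Extract two link-based features from an email body: the number
    of embedded links and the length of the longest URL."""
    urls = URL_RE.findall(email_body)
    return {
        "num_links": len(urls),
        "longest_url_len": max((len(u) for u in urls), default=0),
    }
```
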
- In the example of
FIG. 1, the NLP database 110 is configured to maintain a score for each word, wherein the score represents the likelihood of the word being associated with malicious (phishing) emails. In some embodiments, the fraud detection component 108 is configured to compute term frequency-inverse document frequency (TF-IDF) of each word offline based on a corpus of labeled malicious emails and a corpus of innocent emails to determine the likelihood of the word being malicious. - In the example of
FIG. 1, the reputable domain database 112 is configured to store the likelihood of domains being legitimate for the entity 118. In some embodiments, the reputable domain database 112 includes domains that have been seen by the message collection and analysis component 106 in internal communications more than a certain number of times over a certain period of time (e.g., the last few days). If a certain domain has been seen often in internal communications during a short period of time, it is deemed to be legitimate, as it is unlikely to be associated with a phishing link even if the domain is relatively unpopular. - In the example of
FIG. 1, the domain popularity database 114 is configured to maintain statistics on the popularity of domains of the electronic messages across the internet. The less popular a domain in the electronic messages is, the more likely the domain is to be a phishing link. - In some embodiments, the
fraud detection component 108 is configured to detect anomalous signals/features in attributes, metadata, and/or content of the retrieved electronic messages for email account takeover detection. Here, the anomalous signals include, but are not limited to, the same sender using another email address for the first time, a reply to someone else in the email/electronic message chain, or a sudden change in the number of recipients of an electronic message. - In some embodiments, the
fraud detection component 108 of the AI engine 104 is configured to detect fraudulent incoming messages that are part of a longer conversation that includes more than one electronic message, e.g., a chain of emails. Rather than simply examining the first message of the conversation, the fraud detection component 108 is configured to monitor all electronic messages in the conversation continuously in real time and will flag an electronic message in the conversation for blocking or quarantine at any point once a predetermined set of anomalous signals is detected. - Based on the features and/or signals discussed above, the
fraud detection component 108 is configured to determine with a high degree of accuracy whether the email account is compromised by an email account takeover attack or other kinds of communication fraud and/or past or ongoing network threats, which include but are not limited to a personalized phishing attempt that entices the recipient to click on a link which may ask them to enter their credentials or download a virus, or an attacker hijacking an internal account and using it to communicate with other users in the organization or with external parties. - If the
fraud detection component 108 determines that the email account has been compromised, it is configured to block (remove, delete, modify) or quarantine electronic messages sent from the compromised email account in real time, and to automatically notify the user, the intended recipient(s) of the electronic message, and/or an administrator of the electronic communication system 116 of the email account takeover attack. In addition, the fraud detection component 108 enables the notified parties to remediate the email account takeover incident by allowing them to search for any malicious emails sent from the compromised email account, delete or quarantine such emails from the mailboxes of their recipients, notify the recipients of those emails, and delete and/or reset any malicious mailbox rules, e.g., inbox forwarding rules, which the attacker may have set up on the compromised email account. -
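The remediation actions above can be sketched against a toy in-memory mailbox model. The `Mailbox` structure and `remediate_takeover` function below are hypothetical illustrations, not an API defined by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Mailbox:
    owner: str
    messages: list = field(default_factory=list)         # each: {"sender": ..., "subject": ...}
    forwarding_rules: list = field(default_factory=list)
    quarantine: list = field(default_factory=list)

def remediate_takeover(compromised: str, mailboxes: list) -> int:
    """Quarantine mail sent from a compromised account and reset any
    forwarding rules the attacker may have set up on that account."""
    quarantined = 0
    for mb in mailboxes:
        # Search each recipient mailbox for mail from the compromised account.
        kept = []
        for msg in mb.messages:
            if msg["sender"] == compromised:
                mb.quarantine.append(msg)
                quarantined += 1
            else:
                kept.append(msg)
        mb.messages = kept
        # Delete attacker-created inbox forwarding rules on the compromised mailbox.
        if mb.owner == compromised:
            mb.forwarding_rules.clear()
    return quarantined
```

A real deployment would perform the same steps through the messaging system's administrative API rather than in memory.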
FIG. 2 depicts a flowchart 200 of an example of a process to support email account takeover detection and remediation. Although the figure depicts functional steps in a particular order for purposes of illustration, the processes are not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined, and/or adapted in various ways. - In the example of
FIG. 2, the flowchart 200 starts at block 202, where an internal electronic message sent from an email account of a user in an entity to another user in the entity is automatically collected in real time via an application programming interface (API) call to an electronic messaging system of the entity. The flowchart 200 continues to block 204, where the collected electronic message is analyzed to extract a plurality of features and/or signals from the electronic message to determine if it is malicious for email account takeover detection. The flowchart 200 continues to block 206, where it is determined with a high degree of accuracy whether the email account has been compromised by an email account takeover attack based on the detected features and/or signals in the email. The flowchart 200 continues to block 208, where electronic messages sent from the email account are searched, blocked, and quarantined in real time if it is determined that the email account has been compromised by the email account takeover attack. The flowchart 200 ends at block 210, where the user, one or more intended recipients of the electronic messages, and/or an administrator of the electronic messaging system are notified of the email account takeover attack and are enabled to take one or more remediating actions in response to the email account takeover attack. - In some embodiments, a new privacy-preserving machine learning (ML) mechanism for user login-based account takeover (ATO) detection is proposed, which is configured to detect ATO attacks based on login attempts by users. Specifically, the proposed approach relies on assessing the fraudulence confidence level of login IP addresses to classify the login attempts by the users.
In some embodiments, a plurality of attributes/features in one or more user login data logs are extracted and used to build a labeled dataset for training a machine learning (ML) model that relies on statistics of the login attempts to classify and detect fraudulent logins. Here, the plurality of features include, but are not limited to, one or more of the IP address and/or the user of a login, and a field that provides information about the device and/or browser used in the login. These attributes make it possible to ascertain whether a login attempt or instance by a user is suspicious based on the ML model. For a non-limiting example, if an IP address is used in a considerable number of failed login attempts and very few successful login attempts, the IP address tends to be a suspicious IP. Additionally, if the IP address shows up across multiple entities/companies with the same behavior, it is a confirmation that the IP address is illegitimate. Furthermore, if information about the browser used for the login is suspicious, it is another confirmation that the login attempt is an ATO attack activity rather than a random activity of a legitimate user.
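The suspicion heuristics in this paragraph can be expressed as a simple additive score. The thresholds below are illustrative assumptions, not values from the patent:

```python
def ip_suspicion_score(failed: int, successful: int,
                       entities_seen: int, bad_user_agent: bool) -> int:
    """Score an IP address: the base signal is many failed logins with few
    successes; cross-entity repetition and a suspicious user agent string
    each add a confirmation point."""
    score = 0
    if failed >= 20 and successful <= 2:   # considerable failures, very few successes
        score += 1
    if entities_seen >= 3:                 # same behavior seen across multiple entities
        score += 1
    if bad_user_agent:                     # suspicious browser/device information
        score += 1
    return score
```

An IP scoring 2 or more would be a strong candidate for the "bad reputation" class discussed below; the cutoff itself is a tunable policy choice.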
- Under the proposed approach, the ML model is trained using anonymized user login data to preserve privacy of the users. In some embodiments, any drop in the ML model's accuracy in terms of detecting the ATO attacks as a consequence of the data anonymization is measured. A proper level of data anonymization can be determined based on the ML model's accuracy in detecting the ATO attacks when trained with different versions of the anonymized data. With the ML model trained at the proper level of data anonymization, the proposed approach is able to achieve a balance between accuracy of ATO attack detection and preservation of user data privacy. In addition, the proposed approach is also applicable to training the ML model using incomplete (instead of anonymized) user login data for ATO attack detection.
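The tradeoff described above, picking the strongest anonymization whose accuracy loss stays within a tolerated drop from the baseline, can be sketched as follows. The variant names, accuracy values, and the monotonicity assumption (stronger anonymization never increases accuracy) are illustrative:

```python
def choose_anonymization(variants: list, max_drop: float) -> str:
    """Pick the strongest anonymization whose accuracy loss stays within
    `max_drop` of the baseline (un-anonymized) model.
    `variants` is ordered weakest -> strongest anonymization, each a
    (name, measured_accuracy) pair; the first entry is the baseline."""
    baseline_acc = variants[0][1]
    chosen = variants[0][0]
    for name, acc in variants[1:]:
        # Accept a stronger variant only while the accuracy cost is tolerable.
        if baseline_acc - acc <= max_drop:
            chosen = name
    return chosen
```

For instance, with measured accuracies of 0.95 (full IP), 0.94 (/24 subnet), and 0.88 (country only) and a tolerated drop of 0.02, the subnet variant would be selected.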
-
FIG. 3 depicts an example of a system diagram 300 to support user login-based ATO detection via privacy-preserving machine learning. In addition to the message collection and analysis component 106 and the fraud detection component 108 depicted in FIG. 1 and discussed above, the AI engine 104 further includes a data anonymization component 107 configured to anonymize data used to train various AI models, as discussed in detail below. - In the example of
FIG. 3, the message collection and analysis component 106 of the AI engine 104 is configured to collect data from the electronic messaging system 116, including all login attempts to accounts of users in an entity on the electronic messaging system 116. For a non-limiting example, the data can be collected from Microsoft Office 365 Azure Active Directory, wherein the data includes all login attempts to Office 365 accounts. In some embodiments, the collected data comprises a plurality of (e.g., up to 20) fields/attributes that provide information about one or more of the entity/company, user name, date, device, browser, authentication, and IP address of each login attempt. In some embodiments, the personally identifying information (PII) portion of the collected data, including names and emails of the users, is suppressed by the data anonymization component 107 and only a subset (e.g., 6) of the attributes is utilized by the message collection and analysis component 106 to form a set of possible quasi-identifiers. For a non-limiting example, the quasi-identifiers can be but are not limited to the IP address that each user or employee logged in from, the entity/organization that the user belongs to, the country where each login attempt originated, the time of the login attempt, the operation result that indicates whether the login attempt was successful or not, and other extended attributes (e.g., Extended Properties) that provide information about the browser, device, and authentication used by the login attempt. In some embodiments, one or more additional attributes may be designated as the sensitive attributes. - In some embodiments, the message collection and
analysis component 106 of the AI engine 104 is configured to create an IP reputation model, wherein the IP reputation model relies on a set of features extracted from the collected login attempts to determine reputations of the IP addresses of the login attempts. The reputations of the IP addresses are then utilized by the fraud detection component 108 to classify the users' login attempts. In some embodiments, the set of features is based mainly on statistical data or stats of the features (e.g., IP addresses) extracted by the message collection and analysis component 106 from login attempts to a large number of user accounts across a wide range of entities. In some embodiments, the message collection and analysis component 106 is configured to process the collected data of the login attempts from the electronic messaging system 116 to yield an augmented dataset that provides statistical data of the features (e.g., IP addresses) of the login attempts for training of the ML model discussed below. In some embodiments, the message collection and analysis component 106 is configured to generate the statistical data of the login attempts by grouping the collected data by their IP addresses and aggregating the statistical features of the IP addresses. Each datapoint in the resulting dataset is a distinct IP address and the corresponding columns are the statistical data connected with the IP address. The IP dataset is used to create the IP reputation model discussed above.
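The grouping-and-aggregation step can be sketched in plain Python. The record layout below is a hypothetical simplification, while the output column names follow the per-IP statistics the patent enumerates (IPBadLogins, IPGoodLogins, and so on):

```python
from collections import defaultdict

def aggregate_ip_stats(logins: list) -> dict:
    """Group raw login records by IP address and aggregate per-IP
    statistical features; each record is {"ip", "user", "org", "success"}."""
    acc = defaultdict(lambda: {
        "IPBadLogins": 0, "IPGoodLogins": 0,
        "bad_users": set(), "good_users": set(),
        "bad_orgs": set(), "good_orgs": set(),
    })
    for rec in logins:
        s = acc[rec["ip"]]
        if rec["success"]:
            s["IPGoodLogins"] += 1
            s["good_users"].add(rec["user"])
            s["good_orgs"].add(rec["org"])
        else:
            s["IPBadLogins"] += 1
            s["bad_users"].add(rec["user"])
            s["bad_orgs"].add(rec["org"])
    # Collapse the sets into distinct counts: one datapoint per IP address.
    return {
        ip: {
            "IPBadLogins": s["IPBadLogins"],
            "IPGoodLogins": s["IPGoodLogins"],
            "IPBadUsers": len(s["bad_users"]),
            "IPGoodUsers": len(s["good_users"]),
            "IPBadOrgs": len(s["bad_orgs"]),
            "IPGoodOrgs": len(s["good_orgs"]),
        }
        for ip, s in acc.items()
    }
```

The resulting dictionary corresponds to the augmented per-IP dataset: one row per distinct IP, with the statistical columns used to build the reputation model.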
In some embodiments, the statistical data/stats of the login attempts from each IP address include but are not limited to one or more of: total number of failed login attempts from the IP address (e.g., IPBadLogins), total number of successful login attempts from the IP address (e.g., IPGoodLogins), total number of distinct users with failed login attempts from the IP address (e.g., IPBadUsers), total number of distinct users with successful login attempts from the IP address (e.g., IPGoodUsers), total number of distinct organizations with failed login attempts from the IP address (e.g., IPBadOrgs), and total number of distinct organizations with successful login attempts from the IP address (e.g., IPGoodOrgs). In some embodiments, the message collection and analysis component 106 is configured to parse the set of features/attributes to extract an identification or ID of the user agent to generate a binary flag, e.g., UserAgentFlag, based on the stats of whether that user agent is often correlated with ATO attacks or not. The binary flag is set to 1 if the user agent is suspicious and 0 otherwise. - In some embodiments, the IP reputation model adopts two classes of IP reputation: bad and good. In some embodiments, the message collection and
analysis component 106 is configured to collect a set of random samples of IP addresses that are known to be connected with ATO attacks that, e.g., exploit the password spraying behavior, in order to learn how to decide whether an IP address should be deemed bad or good. Based on these samples and the collected statistical data for these IP addresses discussed above, the message collection and analysis component 106 is configured to train a K-Nearest Neighbor (KNN) model to capture data points of IP addresses with bad reputation for a labeled dataset. In some embodiments, the message collection and analysis component 106 is configured to train another KNN model on a set of random samples of known legitimate logins from IP addresses that are known to be safe for the labeled dataset. The message collection and analysis component 106 is then configured to set criteria, represented as a set of rules, and split the collected IP addresses into the two classes, bad or good reputation IP addresses, based on the criteria. In some embodiments, the message collection and analysis component 106 is configured to form a dataset/classifier with the corresponding reputation by collecting random samples of user logins that took place from each dataset of bad or good reputation IP addresses. A user login is labelled as fraudulent if it took place from a bad reputation IP address. Otherwise, the user login is labelled as a legitimate login. - In the example of
FIG. 3, the data anonymization component 107 of the AI engine 104 is configured to anonymize the labeled dataset of the login attempts used to train the ML model to preserve privacy of the users. In some embodiments, the data anonymization component 107 is configured to apply different variants of data anonymization to the labeled dataset of the login attempts. In some embodiments, the data anonymization component 107 is configured to apply an IP subnet to hide a portion of the full IP address of each of the login attempts in the labeled dataset. In some embodiments, the data anonymization component 107 is configured to adopt a country attribute to omit and replace the IP address from the labeled dataset. In some embodiments, the data anonymization component 107 is configured to suppress the personally identifying information (PII) portion of the collected data, including but not limited to names and emails of the users, from the labeled dataset used to train the ML model. - In some embodiments, the
data anonymization component 107 is configured to compute accuracy, precision, and/or recall for the different variants of data anonymization to determine the drop in the ML model's accuracy in detecting fraudulent user login attempts as a result of the dataset anonymization. Any drop in the ML model's accuracy in terms of detecting the ATO attacks as a consequence of the data anonymization can then be measured so that the cost or tradeoff between data anonymization and loss in the ML model's accuracy can be quantified and understood. Based on such a tradeoff, the data anonymization component 107 is configured to determine a proper or optimal level of data anonymization based on the ML model's accuracy when trained with different variants of the anonymized data. With the ML model trained at the appropriate level of data anonymization, the AI engine 104 is able to achieve a balance between accuracy of ATO attack detection and preservation of user data privacy. - In some embodiments, the
fraud detection component 108 of the AI engine 104 is configured to train the ML model for ATO detection using the anonymized dataset/classifier. In some embodiments, the fraud detection component 108 is configured to train the ML model using an incomplete (instead of anonymized) dataset of user login attempts for ATO attack detection. In some embodiments, training of the ML model may require the user to set various hyper-parameters, e.g., weights, that govern/tune the ML model's training process. In some embodiments, the labeled dataset is split into training and testing sets through cross-validation to make sure the designed ML model operates well in real life. In some embodiments, the fraud detection component 108 is configured to determine an optimal set of parameters or features of the labeled dataset for training of the ML model by conducting a cross-validation grid search, which takes different sets of data as training and testing over the different combinations of the hyper-parameters. Once the ML model has been trained with the optimal set of parameters of the datasets/classifiers, the ML model for ATO detection can be used for detection of fraudulent new user login attempts. Specifically, when a new login attempt is collected by the message collection and analysis component 106, the fraud detection component 108 is configured to join/combine the IP address of the new login attempt with the features and/or attributes of the set of features previously extracted and analyzed. The fraud detection component 108 is then configured to feed the combined IP address of the new login attempt and the extracted features into the trained ML model to classify the new login attempt and output the determination to a system administrator. - In some embodiments, the
fraud detection component 108 is configured to detect an attack that specifically exploits password spraying, where the attacker is able to obtain the credentials of a user to an email account and log in to the user's account to conduct malicious activities (note that the fraud detection component 108 is configured to also detect other, more generic types and instances of cyber attacks). Unlike a brute-force attack, where one username is used with many password variations, a password-spraying attack combines a large number of usernames with a single password and avoids account lock-out because it looks like a series of isolated failed logins. In some embodiments, the password spraying attack exploits a plurality of user login/credential logs/dumps to find common variations of usernames and passwords in the user credential logs. By timely collecting/extracting the data of the login attempts from a plurality of different user credential logs, the fraud detection component 108 is configured to maintain one or more ML models that detect suspicious logins and alert the system administrator accordingly of a password spraying attack. -
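The spraying signature described here, many distinct usernames with only one or two attempts each from a single source, can be sketched as a log scan. The thresholds are illustrative assumptions:

```python
from collections import defaultdict

def detect_spraying(failed_logins: list, min_users: int = 30,
                    max_attempts_per_user: int = 2) -> list:
    """Flag source IPs whose failed logins touch many distinct usernames
    with only a couple of attempts each, the spraying pattern that evades
    per-account lockout. `failed_logins` is a list of (ip, username) pairs."""
    per_ip = defaultdict(lambda: defaultdict(int))
    for ip, user in failed_logins:
        per_ip[ip][user] += 1
    flagged = []
    for ip, users in per_ip.items():
        if (len(users) >= min_users and
                max(users.values()) <= max_attempts_per_user):
            flagged.append(ip)
    return flagged
```

Note that a classic brute-force source (one username, many attempts) is deliberately not flagged by this rule; it would be caught by the failure-count heuristics discussed earlier.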
FIG. 4 depicts a flowchart 400 of an example of a process to support user login-based ATO attack detection and monitoring via privacy-preserving machine learning. In the example of FIG. 4, the flowchart 400 starts at step 402, where data of a plurality of login attempts to a plurality of user accounts on an electronic messaging system is collected. The flowchart 400 continues to block 404, where an IP reputation model is created by extracting a set of features from the collected data of the plurality of login attempts to determine reputations of IP addresses of the login attempts to classify the plurality of login attempts. The flowchart 400 continues to block 406, where a labeled dataset is generated based on the reputations of IP addresses of a set of random samples of the plurality of login attempts with known good or bad reputation of their IP addresses. The flowchart 400 continues to block 408, where the labeled dataset of the login attempts is anonymized to preserve privacy of the users of the login attempts. The flowchart 400 continues to block 410, where the anonymized labeled dataset of the login attempts is used to train a machine learning (ML) model for detection of account takeover (ATO) attacks. The flowchart 400 ends at block 412, where a new login attempt is detected and classified by the trained ML model in combination with the set of extracted features to determine whether the user login attempt is fraudulent or not. - One embodiment may be implemented using a conventional general purpose or a specialized digital computer or microprocessor(s) programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
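The stages of flowchart 400 above can be condensed into a toy end-to-end sketch. A simple threshold rule stands in for the IP reputation and KNN models, and all names and cutoffs are illustrative assumptions:

```python
def run_ato_pipeline(login_records: list, new_login: dict) -> bool:
    """Blocks 402-412 in miniature: aggregate per-IP stats, label IP
    reputations, anonymize to subnets, and classify a new login attempt."""
    # 402/404: collect login records and aggregate per-IP failure statistics.
    fails, total = {}, {}
    for rec in login_records:
        ip = rec["ip"]
        total[ip] = total.get(ip, 0) + 1
        if not rec["success"]:
            fails[ip] = fails.get(ip, 0) + 1
    # 406: label IP reputations (toy failure-ratio rule in place of the KNN models).
    bad_ips = {ip for ip in total if fails.get(ip, 0) / total[ip] > 0.8}
    # 408: anonymize, retaining only the first three octets of each bad IP.
    bad_subnets = {ip.rsplit(".", 1)[0] for ip in bad_ips}
    # 410/412: classify the new login by its (anonymized) source address.
    return new_login["ip"].rsplit(".", 1)[0] in bad_subnets
```

The point of the sketch is the shape of the flow, not the rule itself: in the patent, block 406 is driven by the trained KNN models and block 412 by the ML classifier.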
The invention may also be implemented by the preparation of integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
- The methods and system described herein may be at least partially embodied in the form of computer-implemented processes and apparatus for practicing those processes. The disclosed methods may also be at least partially embodied in the form of tangible, non-transitory machine readable storage media encoded with computer program code. The media may include, for example, RAMs, ROMs, CD-ROMs, DVD-ROMs, BD-ROMs, hard disk drives, flash memories, or any other non-transitory machine-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the method. The methods may also be at least partially embodied in the form of a computer into which computer program code is loaded and/or executed, such that the computer becomes a special purpose computer for practicing the methods. When implemented on a general-purpose processor, the computer program code segments configure the processor to create specific logic circuits. The methods may alternatively be at least partially embodied in a digital signal processor formed of application specific integrated circuits for performing the methods.
Claims (24)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/949,863 US11563757B2 (en) | 2017-08-31 | 2020-11-17 | System and method for email account takeover detection and remediation utilizing AI models |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/693,318 US20190028509A1 (en) | 2017-07-20 | 2017-08-31 | System and method for ai-based real-time communication fraud detection and prevention |
US201862778250P | 2018-12-11 | 2018-12-11 | |
US16/363,596 US10778717B2 (en) | 2017-08-31 | 2019-03-25 | System and method for email account takeover detection and remediation |
US202063037988P | 2020-06-11 | 2020-06-11 | |
US16/947,074 US11159565B2 (en) | 2017-08-31 | 2020-07-16 | System and method for email account takeover detection and remediation |
US16/949,863 US11563757B2 (en) | 2017-08-31 | 2020-11-17 | System and method for email account takeover detection and remediation utilizing AI models |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/947,074 Continuation-In-Part US11159565B2 (en) | 2017-08-31 | 2020-07-16 | System and method for email account takeover detection and remediation |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210090816A1 true US20210090816A1 (en) | 2021-03-25 |
US11563757B2 US11563757B2 (en) | 2023-01-24 |
Family
ID=84957422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/949,863 Active 2038-05-27 US11563757B2 (en) | 2017-08-31 | 2020-11-17 | System and method for email account takeover detection and remediation utilizing AI models |
Country Status (1)
Country | Link |
---|---|
US (1) | US11563757B2 (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180033089A1 (en) * | 2016-07-27 | 2018-02-01 | Intuit Inc. | Method and system for identifying and addressing potential account takeover activity in a financial system |
US20180091453A1 (en) * | 2016-09-26 | 2018-03-29 | Agari Data, Inc. | Multi-level security analysis and intermediate delivery of an electronic message |
US20180152471A1 (en) * | 2016-11-30 | 2018-05-31 | Agari Data, Inc. | Detecting computer security risk based on previously observed communications |
US10148683B1 (en) * | 2016-03-29 | 2018-12-04 | Microsoft Technology Licensing, Llc | ATO threat detection system |
US20190018956A1 (en) * | 2017-07-17 | 2019-01-17 | Sift Science, Inc. | System and methods for digital account threat detection |
US10666666B1 (en) * | 2017-12-08 | 2020-05-26 | Logichub, Inc. | Security intelligence automation platform using flows |
US20210226987A1 (en) * | 2019-12-31 | 2021-07-22 | Akamai Technologies, Inc. | Edge network-based account protection service |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030051026A1 (en) | 2001-01-19 | 2003-03-13 | Carter Ernst B. | Network surveillance and security system |
US6941478B2 (en) | 2001-04-13 | 2005-09-06 | Nokia, Inc. | System and method for providing exploit protection with message tracking |
US7272853B2 (en) | 2003-06-04 | 2007-09-18 | Microsoft Corporation | Origination/destination features and lists for spam prevention |
US20060149821A1 (en) | 2005-01-04 | 2006-07-06 | International Business Machines Corporation | Detecting spam email using multiple spam classifiers |
US7845008B2 (en) | 2005-12-07 | 2010-11-30 | Lenovo (Singapore) Pte. Ltd. | Virus scanner for journaling file system |
US8819823B1 (en) | 2008-06-02 | 2014-08-26 | Symantec Corporation | Method and apparatus for notifying a recipient of a threat within previously communicated data |
US9235704B2 (en) | 2008-10-21 | 2016-01-12 | Lookout, Inc. | System and method for a scanning API |
US8667593B1 (en) | 2010-05-11 | 2014-03-04 | Re-Sec Technologies Ltd. | Methods and apparatuses for protecting against malicious software |
US20110296003A1 (en) | 2010-06-01 | 2011-12-01 | Microsoft Corporation | User account behavior techniques |
US9633201B1 (en) | 2012-03-01 | 2017-04-25 | The 41St Parameter, Inc. | Methods and systems for fraud containment |
US9501746B2 (en) | 2012-11-05 | 2016-11-22 | Astra Identity, Inc. | Systems and methods for electronic message analysis |
US8566938B1 (en) | 2012-11-05 | 2013-10-22 | Astra Identity, Inc. | System and method for electronic message analysis for phishing detection |
US9154514B1 (en) | 2012-11-05 | 2015-10-06 | Astra Identity, Inc. | Systems and methods for electronic message analysis |
US9560069B1 (en) | 2012-12-02 | 2017-01-31 | Symantec Corporation | Method and system for protection of messages in an electronic messaging system |
US10404745B2 (en) | 2013-08-30 | 2019-09-03 | Rakesh Verma | Automatic phishing email detection based on natural language processing techniques |
US10277628B1 (en) | 2013-09-16 | 2019-04-30 | ZapFraud, Inc. | Detecting phishing attempts |
US9882932B2 (en) | 2014-04-02 | 2018-01-30 | Deep Detection, Llc | Automated spear phishing system |
US11159545B2 (en) | 2015-04-10 | 2021-10-26 | Cofense Inc | Message platform for automated threat simulation, reporting, detection, and remediation |
US20170214701A1 (en) | 2016-01-24 | 2017-07-27 | Syed Kamran Hasan | Computer security based on artificial intelligence |
WO2017132170A1 (en) | 2016-01-26 | 2017-08-03 | ZapFraud, Inc. | Detection of business email compromise |
US10116678B2 (en) | 2016-02-25 | 2018-10-30 | Verrafid LLC | System for detecting fraudulent electronic communications impersonation, insider threats and attacks |
US9774626B1 (en) | 2016-08-17 | 2017-09-26 | Wombat Security Technologies, Inc. | Method and system for assessing and classifying reported potentially malicious messages in a cybersecurity system |
US10805314B2 (en) | 2017-05-19 | 2020-10-13 | Agari Data, Inc. | Using message context to evaluate security of requested data |
US11044267B2 (en) | 2016-11-30 | 2021-06-22 | Agari Data, Inc. | Using a measure of influence of sender in determining a security risk associated with an electronic message |
US20180225605A1 (en) | 2017-02-06 | 2018-08-09 | American Express Travel Related Services Company, Inc. | Risk assessment and alert system |
US10891627B2 (en) | 2017-02-15 | 2021-01-12 | Salesforce.Com, Inc. | Methods and apparatus for using artificial intelligence entities to provide information to an end user |
US10425444B2 (en) | 2017-06-07 | 2019-09-24 | Bae Systems Applied Intelligence Us Corp. | Social engineering attack prevention |
US11470029B2 (en) | 2017-10-31 | 2022-10-11 | Edgewave, Inc. | Analysis and reporting of suspicious email |
US20210058395A1 (en) | 2018-08-08 | 2021-02-25 | Rightquestion, Llc | Protection against phishing of two-factor authentication credentials |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12166794B2 (en) * | 2018-10-31 | 2024-12-10 | SpyCloud, Inc. | Detecting use of compromised security credentials in private enterprise networks |
US20220166792A1 (en) * | 2018-10-31 | 2022-05-26 | SpyCloud, Inc. | Detecting use of compromised security credentials in private enterprise networks |
US11750645B2 (en) * | 2018-10-31 | 2023-09-05 | SpyCloud, Inc. | Detecting use of compromised security credentials in private enterprise networks |
US11785040B2 (en) | 2019-10-28 | 2023-10-10 | Capital One Services, Llc | Systems and methods for cyber security alert triage |
US11165815B2 (en) * | 2019-10-28 | 2021-11-02 | Capital One Services, Llc | Systems and methods for cyber security alert triage |
US20210288976A1 (en) * | 2020-03-13 | 2021-09-16 | Mcafee, Llc | Methods and apparatus to analyze network traffic for malicious activity |
US11689550B2 (en) * | 2020-03-13 | 2023-06-27 | Mcafee, Llc | Methods and apparatus to analyze network traffic for malicious activity |
US20210397903A1 (en) * | 2020-06-18 | 2021-12-23 | Zoho Corporation Private Limited | Machine learning powered user and entity behavior analysis |
US20230089920A1 (en) * | 2021-09-17 | 2023-03-23 | Capital One Services, Llc | Methods and systems for identifying unauthorized logins |
US20230199002A1 (en) * | 2021-12-16 | 2023-06-22 | Paypal, Inc. | Detecting malicious email addresses using email metadata indicators |
US12284194B2 (en) * | 2021-12-16 | 2025-04-22 | Paypal, Inc. | Detecting malicious email addresses using email metadata indicators |
CN114465816A (en) * | 2022-03-17 | 2022-05-10 | 中国工商银行股份有限公司 | Password spraying attack detection method, device, computer equipment and storage medium |
US20240015176A1 (en) * | 2022-07-07 | 2024-01-11 | Bank Of America Corporation | Domain Age-Based Security System |
US20240430301A1 (en) * | 2023-06-21 | 2024-12-26 | Id.Me, Inc. | Systems and methods for determining social engineering attack using trained machine-learning based model |
Also Published As
Publication number | Publication date |
---|---|
US11563757B2 (en) | 2023-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10778717B2 (en) | System and method for email account takeover detection and remediation | |
US11563757B2 (en) | System and method for email account takeover detection and remediation utilizing AI models | |
US12184662B2 (en) | Message security assessment using sender identity profiles | |
US11102244B1 (en) | Automated intelligence gathering | |
US20190026461A1 (en) | System and method for electronic messaging threat scanning and detection | |
US11665195B2 (en) | System and method for email account takeover detection and remediation utilizing anonymized datasets | |
Ho et al. | Detecting and characterizing lateral phishing at scale | |
CN112567710B (en) | System and method for contaminating phishing campaign responses | |
US10715543B2 (en) | Detecting computer security risk based on previously observed communications | |
US20220070216A1 (en) | Phishing detection system and method of use | |
US9686297B2 (en) | Malicious message detection and processing | |
US11159565B2 (en) | System and method for email account takeover detection and remediation | |
CN112567707A (en) | Enhanced techniques for generating and deploying dynamic false user accounts | |
US11145221B2 (en) | Method and apparatus for neutralizing real cyber threats to training materials | |
Ranganayakulu et al. | Detecting malicious urls in e-mail–an implementation | |
Goenka et al. | A comprehensive survey of phishing: Mediums, intended targets, attack and defence techniques and a novel taxonomy | |
US11394722B2 (en) | Social media rule engine | |
WO2016067290A2 (en) | Method and system for mitigating malicious messages attacks | |
US11374972B2 (en) | Disinformation ecosystem for cyber threat intelligence collection | |
Wood et al. | Systematic Literature Review: Anti-Phishing Defences and Their Application to Before-the-click Phishing Email Detection | |
EP3195140B1 (en) | Malicious message detection and processing | |
Boggs et al. | Discovery of emergent malicious campaigns in cellular networks | |
Bharath et al. | Introduction to Social Engineering: The Human Element of Hacking | |
Xiaopeng et al. | A multi-dimensional spam filtering framework based on threat intelligence | |
Khanum | Phishing emails analysis and design automatic text generation tool for writing and sending phishing mail. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: BARRACUDA NETWORKS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IBRAHIM, MOHAMED HOSAM AFIFI;SCHWEIGHAUSER, MARCO;CIDON, ASAF;SIGNING DATES FROM 20201116 TO 20201117;REEL/FRAME:054419/0954 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: KKR LOAN ADMINISTRATION SERVICES LLC, AS COLLATERAL AGENT, NEW YORK Free format text: SECURITY INTEREST;ASSIGNOR:BARRACUDA NETWORKS, INC.;REEL/FRAME:061377/0231 Effective date: 20220815
Owner name: UBS AG, STAMFORD BRANCH, AS COLLATERAL AGENT, CONNECTICUT Free format text: SECURITY INTEREST;ASSIGNOR:BARRACUDA NETWORKS, INC.;REEL/FRAME:061377/0208 Effective date: 20220815
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: OAKTREE FUND ADMINISTRATION, LLC, AS COLLATERAL AGENT, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:BARRACUDA NETWORKS, INC.;REEL/FRAME:070529/0123 Effective date: 20250314 |