The science behind Microsoft Threat Protection: Attack modeling for finding and stopping evasive ransomware

Microsoft Threat Protection uses a data-driven approach for identifying lateral movement, combining industry-leading optics, expertise, and data science to deliver automated discovery of some of the most critical threats today.
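The post goes into far more depth, but as a hedged sketch of the general idea (the signal, names, and threshold below are illustrative assumptions, not the product's algorithm): one simple data-driven indicator of lateral movement is a network sign-in between a pair of machines that have rarely, or never, signed in to each other before.

```python
from collections import Counter

def rare_logon_pairs(logon_events, pair_history, min_count=3):
    """Flag source->destination logons whose pair has little or no history.

    logon_events: iterable of (source_machine, dest_machine) in the window.
    pair_history: Counter of how often each pair signed in previously.
    """
    return [(src, dst) for src, dst in logon_events
            if pair_history[(src, dst)] < min_count]

# Toy data: WKSTN-1 routinely reaches FILESRV but has never touched DC-01.
pair_history = Counter({("WKSTN-1", "FILESRV"): 250})
events = [("WKSTN-1", "FILESRV"), ("WKSTN-1", "DC-01")]
print(rare_logon_pairs(events, pair_history))  # [('WKSTN-1', 'DC-01')]
```

A production system would weigh many such signals probabilistically rather than apply a single frequency cutoff; this sketch only shows the rarity idea.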

Data science for cybersecurity: A probabilistic time series model for detecting RDP inbound brute force attacks

Microsoft Defender ATP data scientists and threat hunters collaborate on a data science-driven approach to detecting RDP inbound brute force attacks, protecting customers against real-world threats.
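As a minimal sketch of that idea (my assumptions, not the post's actual model): treat each machine's failed inbound RDP sign-ins as a time series, estimate a baseline rate from history, and flag hours whose counts are wildly improbable under that baseline.

```python
from scipy.stats import poisson

def brute_force_alerts(hourly_failed_logins, p_threshold=1e-6):
    """hourly_failed_logins: dict machine -> list of hourly counts of
    failed inbound RDP sign-ins; the last entry is the current hour."""
    alerts = []
    for machine, counts in hourly_failed_logins.items():
        history, current = counts[:-1], counts[-1]
        baseline = max(sum(history) / max(len(history), 1), 0.1)
        # Probability of seeing >= current failures under Poisson(baseline).
        p_value = poisson.sf(current - 1, baseline)
        if p_value < p_threshold:
            alerts.append((machine, current, p_value))
    return alerts

# A machine that usually sees ~2 failures per hour suddenly sees 500.
series = {"RDP-HOST-7": [1, 3, 2, 0, 2, 500]}
print(brute_force_alerts(series))
```

The detector described in the post combines many more signals (usernames attempted, source IPs, timing patterns); this sketch shows only the count-anomaly core.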

In hot pursuit of elusive threats: AI-driven behavior-based blocking stops attacks in their tracks

Two new machine learning protection features within the behavioral blocking and containment capabilities in Microsoft Defender ATP specialize in detecting threats by analyzing behavior, adding new layers of protection after an attack has started running.
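To make the "after an attack has started running" point concrete, here is a hedged toy sketch (the feature names and weights are invented, and this is not Microsoft's model): re-score a process each time a new behavior is observed, and block once the score crosses a confidence threshold, even mid-execution.

```python
import math

# Hypothetical learned weights for a handful of behavior features.
WEIGHTS = {
    "spawned_from_office_doc": 2.1,
    "deletes_shadow_copies": 4.2,
    "encrypts_many_files": 3.8,
}
BIAS = -6.0

def should_block(observed_behaviors, threshold=0.9):
    """Logistic score over the behaviors seen so far."""
    z = BIAS + sum(WEIGHTS.get(b, 0.0) for b in observed_behaviors)
    score = 1.0 / (1.0 + math.exp(-z))
    return score >= threshold, score

# The score rises as behaviors accumulate; blocking fires mid-attack.
seen = []
for behavior in ["spawned_from_office_doc", "deletes_shadow_copies",
                 "encrypts_many_files"]:
    seen.append(behavior)
    block, score = should_block(seen)
    print(behavior, round(score, 3), "BLOCK" if block else "watch")
```

The design choice worth noting is the incremental re-scoring: no single behavior is damning on its own, but the accumulating sequence is, which is what lets behavior-based blocking catch threats that slipped past pre-execution scans.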

From unstructured data to actionable intelligence: Using machine learning for threat intelligence

Machine learning and natural language processing can automate the processing of unstructured text for insightful, actionable threat intelligence.
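As one small, hedged example of what "processing unstructured text" can mean in practice (this regex-based step is my illustration; the pipeline the post describes uses full NLP models): pull candidate indicators of compromise out of a free-text report.

```python
import re

IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.[a-z]{2,}\b", re.I),
}

def extract_iocs(report_text):
    """Return every unique match per IOC type found in the text."""
    return {name: sorted(set(pat.findall(report_text)))
            for name, pat in IOC_PATTERNS.items()}

report = ("The implant beacons to 203.0.113.7 and pulls a second stage "
          "from cdn.evil-update.example over HTTPS.")
print(extract_iocs(report))
```

Pattern matching handles the rigid indicators; the machine learning the post describes is needed for the fuzzier entities, such as threat actor names, malware families, and described techniques.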

New machine learning model sifts through the good to unearth the bad in evasive malware

Most machine learning models are trained on a mix of malicious and clean features. Attackers routinely try to throw these models off balance by stuffing clean features into malware. Monotonic models are resistant to adversarial attacks because they are trained differently: they look only for malicious features. The magic is this: attackers can't evade a monotonic model by adding clean features. To evade a monotonic model, an attacker would have to remove malicious features.
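The evasion-resistance claim is easy to demonstrate. Below is a hedged toy sketch (the features and weights are invented; the real classifier is a trained monotonic model, not a hand-weighted sum): because every weight is non-negative, adding features can only raise the score, so padding malware with clean features never helps the attacker.

```python
# All weights are >= 0, so the score is monotone non-decreasing in the
# feature set: adding any feature can never lower it.
MALICIOUS_FEATURE_WEIGHTS = {
    "writes_ransom_note": 0.5,
    "calls_crypto_api_in_loop": 0.3,
    "deletes_backup_catalog": 0.4,
}

def monotonic_score(features):
    """Clean features carry weight 0, so stuffing them in changes nothing."""
    return min(1.0, sum(MALICIOUS_FEATURE_WEIGHTS.get(f, 0.0)
                        for f in features))

malware = {"writes_ransom_note", "deletes_backup_catalog"}
padded = malware | {"valid_signature", "benign_strings", "help_dialog"}
assert monotonic_score(padded) >= monotonic_score(malware)  # evasion fails
print(monotonic_score(malware), monotonic_score(padded))    # 0.9 0.9
```

In practice monotonicity is enforced at training time (gradient-boosted tree libraries such as XGBoost and LightGBM support monotone constraints), but the property the assert demonstrates is the same.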