Newsletter Subject: Are You Patched and Up-to-Date?
From: packtpub.com
Email Address: austinm@packtpub.com
Sent On: Fri, Mar 17, 2023 05:03 PM
Email Preheader Text: 6 critical vulnerabilities and 2 Zero-Days were sorted this Patch Tuesday!

SecPro #92: Are You Patched and Up-to-Date?

Hello!

Patch Tuesday rolled around again this week, so we're taking a look at the most pressing issues from the update. If you've not already rolled out these updates to your systems (firstly, why not?!), you will find a breakdown of the most serious issues that need your attention below. Not a lot of time on your hands? Here's a quick preview to whet your appetite until later.

- [CVE-2023-23397]() - A 9.8-rated Microsoft Outlook elevation-of-privilege vulnerability, already exploited in the wild as a zero-day. A specially crafted message can leak a user's Net-NTLMv2 hash with no user interaction, and relaying those credentials can lead to the compromise of further services such as Exchange.
- [CVE-2023-23415]() - A 9.8-rated remote code execution vulnerability in the Windows ICMP implementation. An attacker could execute arbitrary code on a target machine by sending a specially crafted, fragmented packet to a host running an application bound to a raw socket.
- [CVE-2023-21708]() - A 9.8-rated remote code execution vulnerability in the Remote Procedure Call (RPC) runtime. An unauthenticated attacker could execute arbitrary code on the target system by sending a specially crafted RPC call to an exposed RPC host.
- [CVE-2023-23392]() - A 9.8-rated remote code execution vulnerability in the HTTP protocol stack (HTTP.sys). An unauthenticated attacker could execute arbitrary code by sending a specially crafted packet to a server with HTTP/3 enabled.

Needless to say, there's always something worth worrying about! If you're unable to update your systems, extensive (and sometimes potentially network-breaking...) mitigations are also available in Microsoft's documentation.

Cheers!

[Austin Miller]()
Editor in Chief

[_secpro]() | [Packt _secpro Newsletter]() | [The _secpro Website]()

This week's highlights:

- [Kali Purple]()
- [Microsoft Patch Tuesday update!]()
- [The Machine Learning for Cybersecurity Cookbook]()
- [This Week's Survey]()

And with that - on with the show!

Reading from the UK or the US? Check out our offers on [Amazon.com]() and [Amazon.co.uk]().

Announcing Kali Purple

While announcing a new update to Kali Linux ([download]() or [update]() it here!), something rather special also appeared. A “dawn of a new era”, to quote the Kali team: [Kali Purple](). As the name suggests, this is a version of Kali Linux specifically set up for purple teams. Now, the Kali toolkit isn’t just for blasting through defenses; it’s for building them too.

First Things First...

This is not a full rollout. The Kali Purple team has been referring to it as a “preview” of the full OS.

Taken from the [Kali Linux & Friends Discord server]()

Don’t expect a ready-made, organization-affirming toolkit to fall into your lap as soon as you install it. Right now, the team is working on a variety of features that they will add to the OS in time.

Feeling Red? Feeling Blue? You Do You!

The Kali Purple announcement comes with a sense of freedom. Hence the “you do you” tagline: no matter what you need to do as a secpro, you can do it with something from the Kali toolkit. Whether you’re working with a red team, a blue team, or a mixed purple team, Kali now has something to offer you. We all know how extensive Kali’s offensive options are - hence why [_secpro]() readers voted for it as one of their favorite cybersecurity tools of 2022! But a dig through the Kali Purple feature list throws up some interesting features for the defensive line. Based on the [NIST Cybersecurity Framework 1.1](), it is designed to become a complete kit in the same way that regular Kali is a “plug and play” toolkit for a pentester.
What can I look forward to with Kali Purple?

As you can expect with the Kali team, there is an impressive kit rolled out here. Here is the full list of features included with Kali Purple:

- A reference architecture for the ultimate SOC in-a-box, perfect for:
  - Learning
  - Practicing SOC analysis and threat hunting
  - Security control design and testing
  - Blue / Red / Purple teaming exercises
  - Kali spy vs. spy competitions (bare-knuckle Blue vs. Red)
  - Protection of small to medium-sized environments
- Over 100 defensive tools, such as:
  - [Arkime]() - Full packet capture and analysis
  - [CyberChef]() - The cyber Swiss army knife
  - Elastic Security - Security Information and Event Management
  - [GVM]() - Vulnerability scanner
  - [TheHive]() - Incident response platform
  - Malcolm - Network traffic analysis tool suite
  - [Suricata]() - Intrusion Detection System
  - Zeek - (another) Intrusion Detection System
  - All the usual [Kali tools]()
- Defensive tool [documentation]()
- A [pre-generated image]()
- Kali Autopilot - an attack script builder/framework for automated attacks
- The Kali Purple Hub for the community to share:
  - Practice pcaps
  - Kali Autopilot scripts for blue teaming exercises
- A [Community Wiki]()
- A defensive menu structure according to the NIST CSF (National Institute of Standards and Technology Cybersecurity Framework):
  - Identify
  - Protect
  - Detect
  - Respond
  - Recover
- Kali Purple [Discord]() channels for community collaboration and fun
- And a theme: installer, menu entries & Xfce!

Plenty to keep you busy, to say the least! Although many blue teamers will know these tools like the backs of their hands, Kali Purple aims to be the defensive Swiss army knife counterpart to Kali 2023.1. Check out some of the features in action:

- Elastic SIEM: [Elastic SIEM]()
- Malcolm: [Malcolm]()
- The new menu, with tools arranged in accordance with the NIST CSF's five functions: [New Kali Purple menu]()

Got any questions? Check out the [Kali Purple documentation here]() or head to the [_secpro Discord server]() to share your thoughts.

[JOIN US ON DISCORD!]()

This Week's Editorial Article

[Microsoft Patch Tuesday update!]()

Another month, another Patch Tuesday to contend with. If you've not updated your systems yet (or if you're lucky enough not to have to...), here are a few compelling reasons to take this one very seriously.

A New Book from Packt!

[Mastering Linux Security and Hardening]()

- Prevent threat actors from compromising a Linux system
- Use secure directories and strong passwords to create user accounts
- Configure permissions to protect sensitive data

[NEED SOMETHING NEW TO READ?]()

Cybersecurity Fundamentals

[Machine Learning for Cybersecurity Cookbook]()

We're back with another excerpt from the [Machine Learning for Cybersecurity Cookbook](). This time, we're taking a look at how to tackle packed malware. For a full rundown on how to get stuck into this problem, check out the book.

[LIKE WHAT YOU SEE? CLICK HERE]()

MalConv – end-to-end deep learning for malicious PE detection

One of the new developments in static malware detection has been the use of deep learning for end-to-end machine learning for malware detection. In this setting, we completely skip all feature engineering; we need not have any knowledge of the PE header or other features that may be indicative of PE malware. We simply feed a stream of raw bytes into our neural network and train. This idea was first suggested in [the original MalConv paper]().
This architecture has come to be known as MalConv, as shown in the following diagram:

[MalConv in diagram form]()

Getting ready

Preparation for this recipe involves installing a number of packages with pip, namely keras, tensorflow, and tqdm. The command is as follows:

    pip install keras tensorflow tqdm

In addition, benign and malicious files have been provided for you in the PE Samples Dataset folder in the root of the repository. Extract all archives named Benign PE Samples*.7z to a folder named Benign PE Samples, and extract all archives named Malicious PE Samples*.7z to a folder named Malicious PE Samples.

How to do it...

In this recipe, we detail how to train MalConv on raw PE files:

1. Import numpy for vector operations and tqdm to keep track of progress in our loops:

    import numpy as np
    from tqdm import tqdm

2. Define a function to embed a byte as a vector:

    def embed_bytes(byte):
        binary_string = "{0:08b}".format(byte)
        vec = np.zeros(8)
        for i in range(8):
            if binary_string[i] == "1":
                vec[i] = float(1) / 16
            else:
                vec[i] = -float(1) / 16
        return vec

3. Read in the locations of your raw PE samples and create a list of their labels:

    import os
    from os import listdir

    directories_with_labels = [("Benign PE Samples", 0), ("Malicious PE Samples", 1)]
    list_of_samples = []
    labels = []
    for dataset_path, label in directories_with_labels:
        samples = [f for f in listdir(dataset_path)]
        for file in samples:
            file_path = os.path.join(dataset_path, file)
            list_of_samples.append(file_path)
            labels.append(label)

4. Define a convenience function to read in the byte sequence of a file:

    def read_file(file_path):
        """Read the binary sequence of a file."""
        with open(file_path, "rb") as binary_file:
            return binary_file.read()

5. Set a maximum length, max_size, of bytes to read in per sample, embed all the bytes of the samples, and gather the result in X:

    max_size = 15000
    num_samples = len(list_of_samples)
    X = np.zeros((num_samples, 8, max_size))
    Y = np.asarray(labels)
    file_num = 0
    for file in tqdm(list_of_samples):
        sample_byte_sequence = read_file(file)
        for i in range(min(max_size, len(sample_byte_sequence))):
            X[file_num, :, i] = embed_bytes(sample_byte_sequence[i])
        file_num += 1

6. Prepare an optimizer:

    from keras import optimizers

    my_opt = optimizers.SGD(lr=0.01, decay=1e-5, nesterov=True)

7. Utilize the Keras functional API to set up the deep neural network architecture:

    from keras import Input
    from keras.layers import Conv1D, Activation, multiply, GlobalMaxPool1D, Dense
    from keras import Model

    inputs = Input(shape=(8, max_size))
    conv1 = Conv1D(kernel_size=128, filters=32, strides=128, padding="same")(inputs)
    conv2 = Conv1D(kernel_size=128, filters=32, strides=128, padding="same")(inputs)
    a = Activation("sigmoid", name="sigmoid")(conv2)
    mul = multiply([conv1, a])
    b = Activation("relu", name="relu")(mul)
    p = GlobalMaxPool1D()(b)
    d = Dense(16)(p)
    predictions = Dense(1, activation="sigmoid")(d)
    model = Model(inputs=inputs, outputs=predictions)

8. Compile the model and choose a batch size:

    model.compile(optimizer=my_opt, loss="binary_crossentropy", metrics=["acc"])
    batch_size = 16
    num_batches = int(num_samples / batch_size)

9. Train the model on batches:

    for batch_num in tqdm(range(num_batches)):
        batch = X[batch_num * batch_size : (batch_num + 1) * batch_size]
        model.train_on_batch(
            batch, Y[batch_num * batch_size : (batch_num + 1) * batch_size]
        )
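
Before moving on, it can help to sanity-check the byte embedding. The short snippet below is not part of the original recipe; it repeats embed_bytes from Step 2 so it runs on its own, calls it on the byte 10010001 (0x91), and confirms it produces the (1/16, -1/16, ...) pattern discussed in the How it works section:

```python
# Sanity check of the byte embedding (illustrative only, not from the book).
# embed_bytes is repeated verbatim from Step 2 so this snippet is self-contained.
import numpy as np

def embed_bytes(byte):
    binary_string = "{0:08b}".format(byte)
    vec = np.zeros(8)
    for i in range(8):
        if binary_string[i] == "1":
            vec[i] = float(1) / 16
        else:
            vec[i] = -float(1) / 16
    return vec

# 0b10010001 (0x91) should map to (1/16, -1/16, -1/16, 1/16, -1/16, -1/16, -1/16, 1/16).
vec = embed_bytes(0b10010001)
print(vec)  # [ 0.0625 -0.0625 -0.0625  0.0625 -0.0625 -0.0625 -0.0625  0.0625]
assert np.allclose(vec, [1/16, -1/16, -1/16, 1/16, -1/16, -1/16, -1/16, 1/16])
```

Note that positions beyond a sample's length are never written, so those columns of X simply keep the zeros they were initialized with by np.zeros.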
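
The recipe trains on every sample; to gauge generalization you could instead hold some files out, train on the rest, and evaluate. The sketch below is a minimal illustration (not from the book), assuming X and Y from Step 5 and the compiled model from Step 8; you would run it in place of Step 9:

```python
# A minimal train/evaluate sketch (an assumption, not part of the original recipe).
# Assumes X, Y and the compiled `model` exist as built in Steps 5-8.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.2, stratify=Y, random_state=42
)

# Keras accepts whole arrays as well as manual batches, so we can fit directly here.
model.fit(X_train, y_train, batch_size=16, epochs=3, validation_split=0.1)

# Evaluate on the held-out files and inspect a few predicted probabilities.
loss, acc = model.evaluate(X_test, y_test, batch_size=16)
print("held-out accuracy: %.3f" % acc)

probs = model.predict(X_test[:5])  # values near 1 indicate predicted malware
print(probs.ravel())
```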
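
Once trained, scoring a previously unseen PE file follows the same path as Steps 4 and 5: read its bytes, embed up to max_size of them, and call model.predict. A minimal sketch (again not from the book; the example path is a hypothetical placeholder) might look like this:

```python
# Score a single new PE file with the trained model (illustrative sketch).
# Assumes read_file, embed_bytes, max_size and the trained `model` from the recipe.
import numpy as np

def score_file(file_path, model, max_size=15000):
    """Return the model's malware probability for one PE file."""
    byte_sequence = read_file(file_path)
    x = np.zeros((1, 8, max_size))  # unread positions stay zero, as in training
    for i in range(min(max_size, len(byte_sequence))):
        x[0, :, i] = embed_bytes(byte_sequence[i])
    return float(model.predict(x)[0][0])

# Hypothetical example path; substitute a real sample from your dataset.
print(score_file("Benign PE Samples/example.exe", model))
```
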
How it works...

We begin by importing numpy and tqdm (Step 1), a package that allows you to keep track of progress in a loop by showing a percentage progress bar.

As part of feeding the raw bytes of a file into our deep neural network, we use a simple embedding of bytes in an 8-dimensional space, in which each bit of the byte corresponds to a coordinate of the vector (Step 2). A bit equal to 1 means that the corresponding coordinate is set to 1/16, whereas a bit equal to 0 corresponds to a coordinate equal to -1/16. For example, 10010001 is embedded as the vector (1/16, -1/16, -1/16, 1/16, -1/16, -1/16, -1/16, 1/16). Other ways to perform embeddings, such as ones that are trained along with the neural network, are possible; the MalConv architecture makes a simple but computationally fast choice.

In Step 3, we list our samples and their labels, and, in Step 4, we define a function to read the bytes of a file. Note the "rb" setting in place of "r", so as to read the file as a byte sequence. In Step 5, we use tqdm to track the progress of the loop. For each file, we read in the byte sequence and embed each byte into an 8-dimensional space, and we then gather all of these into X. If the number of bytes exceeds max_size = 15000, we stop reading; if the number of bytes is smaller than max_size, the remaining positions are left as 0s. The max_size parameter, which controls how many bytes we read per file, can be tuned according to memory capacity, the amount of computation available, and the size of the samples.

In the following steps (Steps 6 and 7), we define a standard optimizer, namely stochastic gradient descent with a selection of parameters, and define the architecture of our neural network to match closely that of MalConv. Note that we have used the Keras functional API here, which allows us to create non-trivial input-output relations in our model. Finally, note that better architectures and choices of parameters are an open area of research.

Continuing, we are now free to select a batch size and begin training (Steps 8 and 9). The batch size is an important parameter that can affect both the speed and stability of the learning process. For our purposes, we have made a simple choice: we feed in a batch at a time and train our neural network.

Have You Tried...?

Since we're still brimming with excitement over Kali Purple, here are some Kali resources to keep you busy.

- [offensive-security/kali-linux-recipes]() - What it says on the tin.
- [NoorQureshi/kali-linux-cheatsheet]() - Need a quick reference guide? Click right here.
- [jiansiting/Kali-Windows]() - A Kali-like toolkit built on Windows 10.
- [pwnwiki/kaliwiki]() - An unofficial Kali documentation project.
- [b-ramsey/homebrew-kali]() - A Homebrew tap for Kali Linux tools on OS X.

[FORWARDED THIS EMAIL? SIGN UP HERE]()
[NOT FOR YOU? UNSUBSCRIBE HERE]()

Copyright © 2023 Packt Publishing. All rights reserved.

As a GDPR-compliant company, we want you to know why you’re getting this email. The _secpro team, as a part of Packt Publishing, believes that you have a legitimate interest in our newsletter and the products associated with it. Our research shows that you opted in for communication with Packt Publishing in the past, and we think that your previous interest warrants our appropriate communication. If you do not feel that you should have received this or are no longer interested in _secpro, you can opt out of our emails using the unsubscribe link below.

Our mailing address is: Packt Publishing, Livery Place, 35 Livery Street, Birmingham, West Midlands, B3 2PB, United Kingdom. [Add us to your address book]()

Want to change how you receive these emails?
You can [update your preferences]() or [unsubscribe from this list]().
