Shuciran Pentesting Notes

Scanning a Malicious Pickle File using Picklescan

Introduction What is a pickle file? A pickle file is a binary file created using Python’s pickle module, which allows you to serialize i.e convert to a byte stream and deserialize i.e reconstruct P...

Sanitizing Prompts with LLM Guard

Introduction We may use a slightly modified version of the chatbot, but the core functionality will be the same. 1) We are going to load a model 2) We are going to load a tokenizer 3) We are goin...

Finding and Fixing Weaknesses in AI Code

Introduction Understanding Static Application Security Testing SAST helps in: 1) Finding the weaknesses in the code that might materialize into vulnerabilities. 2) Providing recommendations for f...

Analyzing and Fixing Vulnerabilities in Third-Party Components

Introduction Software Component Analysis (SCA) helps in: 1) Identifying the components used in a software. 2) Understanding the dependencies between the components. 3) Understanding the potential...

LLM Hallucination Lab

Introduction LLM hallucination refers to the phenomenon where a language model such as GPT, BERT, or other large language models generates information that is incorrect, nonsensical, or fabricated,...

Extracting Sensitive Information through an LLM

Introduction The LLM has some protections built in to itself to safeguard some types of confidential information. We will try to bypass those safeguards using cleverly written prompts. Requirement...

User Prompts and System Prompts

Requirements apt update apt install python3-pip -y mkdir llm-prompts cd llm-prompts cat >requirements.txt <<EOF transformers==4.48.3 torch==2.6.0 accelerate==1.8.1 einops==0.8.1 jinja2==3...

Prompt Injection Step by Step

Introduction We may use a slightly modified version of the chatbot, but the core functionality will be the same. 1) We are going to load a model 2) We are going to load a tokenizer 3) We are going...

Performing Sentiment Analysis Using an LLM

About Sentiment Analysis Sentimental Analysis is a crucial task in natural language processing (NLP) that involves determining the emotional tone or polarity of a given text, classifying it as posi...

Attacking an LLM Model using Prompt Injection

Prompt injection is a type of attack against AI language models where an attacker attempts to manipulate the model’s behavior by inserting carefully crafted text into the input prompt. This techniq...