About Me

My Story of Curiosity, Growth, and Purpose in Cybersecurity

Profile

My journey into cybersecurity began at the age of eighteen, when I started my career as a digital forensic investigator, analyzing electronic evidence in support of criminal prosecutions. Early in that role, I witnessed how technology could be weaponized: how individuals with advanced technical skills could exploit systems to cause real harm. That experience fundamentally changed my perspective. I began to ask myself a question that has shaped my entire career: what if the same knowledge used to break systems could be used to strengthen them instead?

That question marked the beginning of my transition from digital forensics to offensive security, a field where understanding how things fail becomes the foundation for making them stronger. What started as analytical curiosity quickly became a passion for ethical hacking, vulnerability research, and security engineering. By late 2020, I had shifted my focus entirely toward penetration testing and vulnerability discovery, driven by a desire to use adversarial thinking as a tool for defense.

To me, offensive security isn’t about exploitation; it’s about comprehension. The deeper we understand how systems can be compromised, the more effectively we can design them to resist compromise. I believe that possessing the ability to cause harm but choosing instead to use that ability for good represents a higher professional calling. Helping organizations harden their defenses and prevent breaches gives my work a profound sense of meaning and purpose.

Since then, I’ve participated in vulnerability disclosure programs that help secure international organizations, practiced continuously through CTFs and advanced simulation labs, and earned recognition in multiple Hall of Fame listings. One of my proudest milestones has been receiving CVEs, awards, and recognition for vulnerabilities I have discovered, a validation of years of disciplined research and hands-on practice.

That early foundation in forensic analysis taught me how to think like an investigator. Offensive security taught me how to think like an adversary. Combining both perspectives has become the cornerstone of how I approach every system I analyze, every vulnerability I uncover, and every solution I design.

My current work sits at the convergence of offensive security, automation, and applied machine learning. I build attack-oriented automation that stresses real systems across diverse technology stacks to expose meaningful weaknesses and validate defensive assumptions. I design end-to-end pipelines that combine workflow orchestration, probabilistic decision logic, and LLM-driven reasoning so that reconnaissance, confirmation, and reporting become repeatable, auditable processes rather than ad-hoc activities. These pipelines incorporate agent-style components that synthesize telemetry and propose exploit strategies, integrations that connect automated scanners with reasoning modules to accelerate hypothesis testing, and instrumentation that captures the exact conditions necessary to reproduce findings reliably. I maintain modular testbeds for safety and repeatability, and I prioritize high-fidelity telemetry so that every experiment can be audited, triaged, and translated into defensible remediation steps.

Complementing these automated workflows, I continuously refine my practical skills through daily, hands-on practice on TryHackMe and Hack The Box, where I validate ideas, recreate real-world exploit classes, and iterate on automation techniques. This disciplined practice underpins my engineering: I am actively pursuing a personal milestone of a 2,000-day TryHackMe streak as a commitment to continuous learning and daily craftsmanship. That sustained cadence keeps me attuned to emergent attack surfaces and new defensive idioms, and it feeds directly into the tooling and test scenarios I develop in the lab. While automation scales hypothesis generation, human analysts remain central to validating context, interpreting risk, and shaping vendor-safe disclosures.

A major strand of my research examines the security of machine learning systems themselves. I study practical failure modes such as prompt injection that manipulates model behavior, model extraction techniques that approximate proprietary functionality, and data-leakage paths where sensitive inputs or training artifacts can be revealed. My primary, pragmatic research question asks whether AI-driven systems can rediscover or bypass mitigations after a patch and, if so, what semantic, environmental, or process conditions enable such regressions. In several of my experiments, this process has led to the identification of new vulnerabilities and the assignment of fresh CVEs to systems that were previously considered fully patched, demonstrating how adaptive automation can uncover overlooked flaws within supposedly secured components.

I approach this question with layered experiments: I reproduce canonical vulnerabilities and their official fixes in controlled research environments, drive hybrid analyzers that combine static and dynamic analysis with model-assisted synthesis, and catalogue each instance where a bypass succeeds, investigating whether the root cause lies in missing validation, brittle environment assumptions, or semantic gaps within the mitigation logic. The objective is not only to expose these weaknesses but to translate the findings into durable security insights, helping teams close the gap between patch deployment and true resilience.
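To make the shape of these pipelines concrete, here is a deliberately minimal sketch of the recon, confirm, and report loop described above. It is an illustration under simplified assumptions, not my production tooling; every name in it (Finding, PipelineRun, run_pipeline) is a hypothetical placeholder.

```python
# Minimal sketch of an auditable recon -> confirm -> report pipeline.
# All names here are illustrative placeholders, not production tooling.

import json
import time
from dataclasses import dataclass, field, asdict
from typing import Callable, Dict, List


@dataclass
class Finding:
    target: str
    hypothesis: str           # e.g. "parameter 'id' may be injectable"
    evidence: Dict[str, str]  # raw telemetry needed to reproduce the result
    confirmed: bool = False


@dataclass
class PipelineRun:
    """Captures every stage transition so a run can be audited later."""
    log_lines: List[str] = field(default_factory=list)
    findings: List[Finding] = field(default_factory=list)

    def log(self, stage: str, payload: Dict) -> None:
        # High-fidelity telemetry: timestamp every stage transition.
        self.log_lines.append(
            json.dumps({"t": time.time(), "stage": stage, **payload}))


def run_pipeline(target: str,
                 recon: Callable[[str], List[Finding]],
                 confirm: Callable[[Finding], bool]) -> PipelineRun:
    """Recon -> confirm -> report, with telemetry at each step."""
    run = PipelineRun()
    run.log("recon:start", {"target": target})
    candidates = recon(target)  # scanner- or LLM-assisted hypothesis source
    run.log("recon:done", {"candidates": len(candidates)})

    for finding in candidates:
        finding.confirmed = confirm(finding)  # independent confirmation step
        run.log("confirm", {"hypothesis": finding.hypothesis,
                            "confirmed": finding.confirmed})
        run.findings.append(finding)

    # Report only confirmed findings, each carrying reproduction evidence.
    report = [asdict(f) for f in run.findings if f.confirmed]
    run.log("report", {"confirmed": len(report)})
    return run
```

Keeping the telemetry as append-only JSON lines is the design choice that makes a run auditable: every hypothesis, confirmation, and report entry can be replayed and checked after the fact.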
In addition to my AI-focused research, I apply the same adversarial engineering mindset across other high-impact platforms, extending my work into Web3 and decentralized systems. I am developing advanced testing frameworks and automation specifically for Web3 wallet security, with the aim of creating one of the world’s top ten frameworks for assessing and strengthening wallet resilience. This initiative focuses on building repeatable, instrumented testbeds and a structured methodology that enable both researchers and engineering teams to evaluate how wallets behave under real-world adversarial conditions. I use these controlled environments to study integration behaviors, trace the origins of security failures, and measure how defenses perform under simulated attacks. My goal is to transform wallet security testing into a reproducible science, producing reliable data, practical guidance, and engineering-grade insights that help shape a safer, more trustworthy Web3 ecosystem.
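As a sketch of what such an instrumented, repeatable testbed can look like, the harness below runs the same adversarial scenario against any wallet adapter. WalletHarness, StubWallet, the adapter interface, and the scenario itself are hypothetical stand-ins chosen for illustration; they are not the framework described above.

```python
# Illustrative-only sketch of a repeatable wallet-resilience test case.
# The adapter interface and all names are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ScenarioResult:
    name: str
    passed: bool
    telemetry: Dict[str, str]  # reproduction details captured per run


class WalletHarness:
    """Runs identical adversarial scenarios against any wallet adapter,
    so results stay comparable across wallets and across re-runs."""

    def __init__(self, wallet_adapter: object) -> None:
        self.wallet = wallet_adapter
        self.scenarios: List[Callable[[object], ScenarioResult]] = []

    def register(self, scenario: Callable[[object], ScenarioResult]) -> None:
        self.scenarios.append(scenario)

    def run_all(self) -> List[ScenarioResult]:
        return [scenario(self.wallet) for scenario in self.scenarios]


def address_mismatch_scenario(wallet) -> ScenarioResult:
    """Does the wallet clearly surface a mismatched recipient address?"""
    shown = wallet.render_confirmation(to="0xattacker...",
                                       display_as="0xfriend...")
    passed = bool(shown.get("warning_displayed"))
    return ScenarioResult("address_mismatch_scenario", passed,
                          {"rendered": str(shown)})


class StubWallet:
    """Trivial stand-in adapter so the sketch runs end-to-end; a real
    adapter would drive an actual wallet UI or API."""

    def render_confirmation(self, to: str, display_as: str) -> Dict[str, bool]:
        # The stub simply flags any mismatch between the real and
        # displayed recipient.
        return {"warning_displayed": to != display_as}


if __name__ == "__main__":
    harness = WalletHarness(StubWallet())
    harness.register(address_mismatch_scenario)
    for result in harness.run_all():
        print(result.name, "passed" if result.passed else "failed")
```

Because every scenario runs through the same adapter interface and emits its own telemetry, the same battery of tests can be replayed against different wallets, which is what makes the methodology repeatable rather than ad hoc.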
Across all domains, the goal is the same: to transform ad-hoc vulnerability discovery into a structured, reproducible science, one that not only uncovers how and why systems fail but also translates those insights into actionable defense strategies. I aim to build research models that replace chance findings with repeatable methodologies, producing evidence-based remediation playbooks, adaptive detection guidance, and engineering patterns that can be applied across technologies. Every experiment, framework, and publication is designed to move the field closer to consistency, accountability, and measurable security progress, ensuring that discoveries do more than expose flaws; they create lasting improvements that can be validated, taught, and scaled globally.

Ethics and safety remain the foundation of everything I do. My research is driven by the belief that offensive security should strengthen, not disrupt, the ecosystems it touches. I follow a philosophy of responsible innovation: every discovery, experiment, or publication must ultimately contribute to defense, awareness, and progress. I prioritize transparency, collaboration, and accountability, ensuring that the knowledge gained through offensive research leads to stronger protections rather than unnecessary exposure. Automation may accelerate what is possible, but it is human judgment, integrity, and intent that determine its purpose and impact.

Ultimately, my work is about raising the standard for what it truly means to test systems in an era of intelligent, adaptive adversaries. Cybersecurity is no longer a matter of static checklists or isolated patches; it is a dynamic contest between human creativity and machine intelligence. My goal is to push that boundary forward: to transform testing from a reactive process into a proactive discipline grounded in automation, empirical data, and ethical responsibility. By combining disciplined daily practice with structured experimentation and robust instrumentation, I aim to expose the subtle, high-impact failure modes that conventional testing often overlooks. Every exploit chain I analyze, every proof of concept I build, and every patch I retest is part of a larger effort to map how complex systems actually fail under intelligent pressure. This approach goes beyond simply breaking things; it’s about understanding the intricate behaviors that emerge when humans, machines, and code interact at scale, and using that knowledge to engineer more resilient defenses. I believe the next frontier of security research lies in human–machine collaboration, where automated agents extend the reach of human intuition without replacing it.

My tooling is designed to make that collaboration possible: scalable frameworks that detect weaknesses autonomously but still require human reasoning to interpret, prioritize, and remediate. This synthesis of automation and awareness allows vulnerabilities to be discovered and addressed before they can evolve into real-world exploits. Every project I lead is guided by a principled disclosure posture: the belief that transparency, ethics, and respect for impact must coexist with innovation. Responsible disclosure isn’t just procedure; it’s the bridge between offensive insight and defensive progress. Through coordinated advisories, CVE reporting, and shared tooling, I aim to ensure that each discovery strengthens the ecosystem rather than destabilizing it.

My passion for offensive security across all areas of technology drives me to explore emerging domains, from cloud infrastructures and decentralized applications to AI systems and embedded devices, each representing a unique blend of opportunity and risk. I see offensive research as a form of applied curiosity: the pursuit of understanding through controlled disruption. It’s the art of thinking like an adversary, not to harm but to reveal what must be protected most. In essence, my mission is to design a future where testing is intelligent, continuous, and ethical, where every vulnerability uncovered leads to a stronger, more transparent, and more secure digital world. My research and tooling reflect that commitment to defensive impact: to discover deeply, measure precisely, and design fixes that endure when tested by the smartest, fastest, and most adaptive adversaries our era has ever known.