<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://madusanakcs.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://madusanakcs.github.io/" rel="alternate" type="text/html" /><updated>2025-10-11T12:21:21+00:00</updated><id>https://madusanakcs.github.io/feed.xml</id><title type="html">Chamod Shyamal Madusan</title><subtitle>Write an awesome description for your new site here. You can edit this line in _config.yml. It will appear in your document head meta (for Google search results) and in your feed.xml site description.</subtitle><author><name>Chamod Shyamal Madusan</name></author><entry><title type="html">Who is the GOAT in F1 ?</title><link href="https://madusanakcs.github.io/blog/f1/" rel="alternate" type="text/html" title="Who is the GOAT in F1 ?" /><published>2025-02-03T00:00:00+00:00</published><updated>2025-02-03T00:00:00+00:00</updated><id>https://madusanakcs.github.io/blog/f1</id><content type="html" xml:base="https://madusanakcs.github.io/blog/f1/"><![CDATA[<p><img src="https://github.com/user-attachments/assets/c3842b3b-c5a3-484b-8cf8-07eaa22ab6cc" alt="image" /></p>

<p>In Formula 1, determining the “greatest” driver involves evaluating a combination of individual skills and the performance of their car. Key metrics include braking efficiency, cornering speed, throttle control, and steering precision. For instance, braking performance is assessed by how effectively a driver decelerates into corners, with data showing that drivers like Daniel Ricciardo and Pastor Maldonado have demonstrated exceptional braking skills in past races.</p>

<p>Similarly, cornering speed reflects a driver’s ability to navigate turns swiftly, which is crucial for maintaining competitive lap times. Throttle control measures how smoothly a driver applies acceleration, impacting both speed and tire management. Steering precision indicates how accurately a driver positions the car on the track, affecting overall performance.</p>

<p>However, a driver’s performance is also heavily influenced by the car’s capabilities. The Drag Reduction System (DRS), for example, allows drivers to adjust the rear wing to reduce aerodynamic drag and increase top speed, facilitating overtaking maneuvers.</p>

<p>Additionally, the car’s braking system, suspension setup, and engine power significantly affect a driver’s ability to execute maneuvers effectively. Therefore, while individual skill is vital, the synergy between driver and car performance is essential in determining the overall success in Formula 1 racing.</p>

<p>In Formula 1, the performance of a driver is significantly influenced by the quality of their car. A superior car can compensate for a driver’s shortcomings, while even the most skilled drivers may struggle with a less competitive vehicle. Therefore, evaluating a driver’s true talent requires considering the performance metrics of their car, including speed, reliability, and technological advancements.</p>

<p>Historically, certain cars have dominated the grid, showcasing exceptional engineering and design. For instance, the Ferrari F2004, used during the 2004 season, is renowned for its dominance, winning 15 out of 18 races and securing both the Drivers’ and Constructors’ Championships.</p>

<p>Similarly, the Red Bull RB19, introduced in 2023, was a formidable contender, winning 21 of the season’s 22 races and underscoring the impact of a well-engineered car on a driver’s success.</p>

<p><img src="https://github.com/user-attachments/assets/320ae6a1-fd7c-49ca-8c0c-2aa69cceea82" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/35e7791f-d355-4a90-8acd-00262599aa98" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/f53ad047-0427-47de-a843-5eac751f22c3" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/85dfa2dc-5891-4051-b19c-30035924258f" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/23cc19c9-ede7-4d8f-befd-d9e3b1678076" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/d169850f-ea1e-4059-b04c-f6ac418ad4ed" alt="image" /></p>

<h1 id="formula-1-performance-score-ps-calculation">Formula 1 Performance Score (PS) Calculation</h1>

<p>The <strong>Performance Score (PS)</strong> for a Formula 1 driver is a unique metric designed to evaluate their performance within the context of their team and the entire season. It balances intra-team dominance with the driver’s contribution to the team’s success throughout the season.</p>

<h2 id="formula">Formula</h2>

<p>The Performance Score is calculated using the following formula:</p>

<p><img src="https://github.com/user-attachments/assets/0dd51289-0598-42ab-9136-b4b99ca3d9a5" alt="image" /></p>

<p>Where:</p>
<ul>
  <li><strong>Driver’s Points</strong>: The total points accumulated by the driver in the season.</li>
  <li><strong>Team Points</strong>: The total points accumulated by both drivers in the team for that season.</li>
  <li><strong>Season Points</strong>: The total points accumulated by all drivers in the entire season.</li>
</ul>

<h2 id="key-metrics">Key Metrics</h2>

<ol>
  <li><strong>Driver’s Points</strong>: The total number of points the driver has earned throughout the season.</li>
  <li><strong>Team Points</strong>: The combined points earned by both drivers in the team during the season.</li>
  <li><strong>Season Points</strong>: The total points earned by all drivers in the season.</li>
</ol>

<h2 id="purpose">Purpose</h2>

<p>This metric is designed to reward drivers who:</p>
<ul>
  <li><strong>Outperform their teammates</strong>: This is reflected in the ratio of <strong>Driver’s Points</strong> to <strong>Team Points</strong>.</li>
  <li><strong>Have a significant impact on their team’s success</strong>: This is captured by the ratio of <strong>Driver’s Points</strong> to <strong>Season Points</strong>, with a multiplier of 20 to emphasize the contribution to the overall championship.</li>
</ul>

<h2 id="explanation">Explanation</h2>

<p>The Performance Score combines two important factors:</p>
<ol>
  <li><strong>Intra-team dominance</strong>: By comparing the driver’s points to the team’s total points, the score rewards drivers who outperform their teammates.</li>
  <li><strong>Season-wide impact</strong>: The score also considers the driver’s contribution to the overall championship performance, adjusting for their impact on the season as a whole.</li>
</ol>

<p>This formula allows for a comprehensive evaluation of a driver’s legendary status in Formula 1 by recognizing both individual performance and team contribution.</p>
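<p>As a concrete illustration, the two components described above can be computed directly. The formula itself appears only as an image, so the sketch below returns the two terms separately rather than guessing how the image combines them; the points values in the example are made up.</p>

```python
def ps_components(driver_pts, team_pts, season_pts):
    """Components of the Performance Score described above.

    Returns the intra-team dominance ratio and the season-wide impact
    term (with the multiplier of 20); the exact way the published
    formula combines them is shown only in the image.
    """
    intra_team = driver_pts / team_pts            # share of the team's points
    season_impact = 20 * driver_pts / season_pts  # season-wide contribution
    return intra_team, season_impact

# Hypothetical example: a driver scoring 25 of the team's 40 points,
# in a season totalling 100 points across all drivers.
intra, season = ps_components(25, 40, 100)  # (0.625, 5.0)
```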

<p><img src="https://github.com/user-attachments/assets/dacf55af-ad12-49f8-824a-3d9f414bbdb7" alt="image" /></p>

<p>Based on the data from the tables, Max Verstappen stands out as the GOAT (Greatest of All Time) in the context of hybrid-era Formula 1. With a remarkable average performance score of 0.85 and four championships to his name, Verstappen’s dominance in recent seasons, especially his 2023 performance score of 0.96, surpasses other legendary drivers like Michael Schumacher, Lewis Hamilton, and Sebastian Vettel. His adaptability, coupled with unmatched consistency in the hybrid era, makes him a key figure in this era of Formula 1. Verstappen’s ability to perform at the highest level, even under intense pressure, highlights his unparalleled skill and evolution in the sport.</p>

<p>However, Lewis Hamilton and Michael Schumacher also remain in the conversation for the GOAT title due to their longevity, consistency, and dominant performances over the years. Hamilton, with seven championships and an average performance score of 0.81, has been a constant force, leading Mercedes during their most successful years. Schumacher, with his unmatched dominance in the early 2000s and seven titles, also brings a unique blend of tactical brilliance and relentless work ethic to the table. When compared to these greats, Verstappen’s performance in the hybrid era, especially against his competitors, positions him at the forefront, but the legacy of Hamilton and Schumacher ensures that the debate will continue for years to come.
<img src="https://github.com/user-attachments/assets/dd70c7e2-b60e-44fe-baef-c346a1ed519c" alt="image" /></p>]]></content><author><name>Chamod Shyamal Madusan</name></author><category term="Blog" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">High-Dimensional Geometry: Hypercubes and Gaussian</title><link href="https://madusanakcs.github.io/blog/cube/" rel="alternate" type="text/html" title="High-Dimensional Geometry: Hypercubes and Gaussian" /><published>2025-01-13T16:00:00+00:00</published><updated>2025-01-13T16:00:00+00:00</updated><id>https://madusanakcs.github.io/blog/cube</id><content type="html" xml:base="https://madusanakcs.github.io/blog/cube/"><![CDATA[<h2 id="concentration-of-volume-of-a-hypercube">Concentration of Volume of a Hypercube</h2>

<p>A p-dimensional unit hypercube is the subset of <img src="https://github.com/user-attachments/assets/b69a96c6-5ef9-4b38-a6ae-a1fb860b68a1" alt="image" /> defined as
<img src="https://github.com/user-attachments/assets/155eafe8-a47e-40ae-ad23-f8b1592b1bb1" alt="image" /></p>

<ul>
  <li>The hypercube has <img src="https://github.com/user-attachments/assets/5cbe5866-e4ab-4631-914e-b04c56f091fc" alt="image" /> vertices</li>
  <li>
    <p>Therefore, the maximum distance between any two points is <img src="https://github.com/user-attachments/assets/c47e9172-4f4f-4ff7-b500-0c66994b6978" alt="image" /></p>
  </li>
  <li>As p increases, dmax also increases; the corners therefore stretch further from the center</li>
  <li>Since the total volume is fixed at unity, the rest of the hypercube must shrink</li>
  <li>The volume thus concentrates at the corners as <img src="https://github.com/user-attachments/assets/d60d93f9-eae4-4c0e-80d3-9afd5e2c57a9" alt="image" /></li>
</ul>
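<p>A quick Monte Carlo check of this corner concentration (a sketch, assuming NumPy is available): estimate the fraction of the unit cube’s volume that lies outside the inscribed ball, i.e. toward the corners.</p>

```python
import numpy as np

def corner_mass(p, n=100_000, seed=0):
    """Monte Carlo estimate of the fraction of the unit cube's volume that
    lies OUTSIDE the inscribed ball of radius 1/2, i.e. toward the corners."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, p)) - 0.5  # uniform points in the centered unit cube
    return (np.linalg.norm(pts, axis=1) > 0.5).mean()

fractions = {p: corner_mass(p) for p in (2, 10, 50)}
# In 2D roughly a fifth of the volume lies outside the inscribed disk;
# by p = 50 essentially all of it does.
```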

<h3 id="concentration-of-volume-of-a-hypercube-at-its-corners">Concentration of Volume of a Hypercube at Its Corners</h3>

<p><img src="https://github.com/user-attachments/assets/3e06416a-406d-4a6b-b322-6d55ebe08a10" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/4ef7b808-5c5c-4341-81f4-46fd076112a9" alt="image" /></p>

<h2 id="gaussians-in-high-dimension">Gaussians in High Dimension</h2>

<p><img src="https://github.com/user-attachments/assets/3da1c562-6d1f-4aaa-9fcc-22f05d321897" alt="image" />
<img src="https://github.com/user-attachments/assets/8033ca39-99d2-4dfd-8e79-9e8c36b68dae" alt="image" />
<img src="https://github.com/user-attachments/assets/0ad335df-6c8b-4c0a-acb7-fa06afd55711" alt="image" />
<img src="https://github.com/user-attachments/assets/5ecc60de-e401-4545-868f-6cdd2ff35d69" alt="image" />
<img src="https://github.com/user-attachments/assets/ef1826af-08ec-4a19-9f5f-f4915b0ab4be" alt="image" />
<img src="https://github.com/user-attachments/assets/3b5b62d4-0354-4e03-a973-0cce5a7e0fb0" alt="image" />
<img src="https://github.com/user-attachments/assets/06f22fd0-dd03-49e5-bf09-c692a84dd4ba" alt="image" /></p>
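<p>The figures above state the standard concentration result: a p-dimensional standard Gaussian puts almost all of its mass in a thin shell of radius about √p. A quick numerical check (a sketch, assuming NumPy):</p>

```python
import numpy as np

def norm_stats(p, n=10_000, seed=1):
    """Sample mean and standard deviation of ||x|| for x ~ N(0, I_p)."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(rng.standard_normal((n, p)), axis=1)
    return norms.mean(), norms.std()

# The mean norm grows like sqrt(p) while the spread stays O(1),
# so relative to its radius the shell becomes ever thinner.
```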

<h3 id="practical-implications">Practical Implications</h3>

<ol>
  <li><strong>Distance Metrics</strong>: Euclidean distances become less meaningful in high dimensions.</li>
  <li><strong>Normalization</strong>: Data should often be scaled to a unit sphere or hypercube.</li>
  <li><strong>Sampling</strong>: Random points in high dimensions are almost always near edges/shells.</li>
</ol>]]></content><author><name>Chamod Shyamal Madusan</name></author><category term="Blog" /><category term="Post Formats" /><category term="readability" /><category term="standard" /><summary type="html"><![CDATA[Concentration of Volume of a Hypercube]]></summary></entry><entry><title type="html">High-Dimensional Volume and Concentration - Sphere</title><link href="https://madusanakcs.github.io/blog/sphere/" rel="alternate" type="text/html" title="High-Dimensional Volume and Concentration - Sphere" /><published>2024-11-27T16:00:00+00:00</published><updated>2024-11-27T16:00:00+00:00</updated><id>https://madusanakcs.github.io/blog/sphere</id><content type="html" xml:base="https://madusanakcs.github.io/blog/sphere/"><![CDATA[<h1 id="1-volume-of-a-p-dimensional-unit-sphere"><strong>1. Volume of a p-Dimensional Unit Sphere</strong></h1>

<h3 id="definition"><strong>Definition</strong></h3>
<p>The p-dimensional unit sphere Sp is defined as</p>

<p><img src="https://github.com/user-attachments/assets/04bdacc8-279f-44d8-91e3-7033f07dc55c" alt="image" /></p>

<h2 id="volume-derivation"><strong>Volume Derivation</strong></h2>
<p>The volume Vp is computed recursively using polar coordinates and the Beta function:</p>
<ol>
  <li><strong>Recursive Integral</strong>:</li>
</ol>

<p><img src="https://github.com/user-attachments/assets/8d8fa7cb-65c1-443e-9b1a-cea1a608fcb5" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/76e76da2-e4f3-4f65-8b6f-8285ea7180bd" alt="image" /></p>

<ol start="2">
  <li><strong>Beta Function Substitution</strong>:</li>
</ol>

<p><img src="https://github.com/user-attachments/assets/765708b1-090f-434a-a6ab-b9625b6668bc" alt="image" /></p>

<ol start="3">
  <li><strong>Final Formula</strong>:</li>
</ol>

<p><img src="https://github.com/user-attachments/assets/0a19c43f-85c6-45ad-99c8-d43e41f5793a" alt="image" /></p>
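<p>Assuming the final formula shown above is the standard closed form V<sub>p</sub> = π<sup>p/2</sup> / Γ(p/2 + 1), it can be evaluated directly:</p>

```python
from math import gamma, pi

def unit_ball_volume(p):
    """Volume of the p-dimensional unit ball: pi^(p/2) / Gamma(p/2 + 1)."""
    return pi ** (p / 2) / gamma(p / 2 + 1)

# unit_ball_volume(2) gives pi and unit_ball_volume(3) gives 4*pi/3;
# the volume tends to 0 as p grows.
```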

<h1 id="2-concentration-of-volume-near-the-equator"><strong>2. Concentration of Volume Near the Equator</strong></h1>

<p><img src="https://github.com/user-attachments/assets/c9c14e2e-50c3-4e36-91b3-91961194ed90" alt="image" /></p>

<h2 id="spherical-caps-and-volume-ratio"><strong>Spherical Caps and Volume Ratio</strong></h2>
<p>For a spherical cap at distance <img src="https://github.com/user-attachments/assets/e6fcc401-1aba-4637-93c1-a1531b50ee8f" alt="image" /> from the equator (x1 = 0), the volume fraction is</p>

<p><img src="https://github.com/user-attachments/assets/bd709aac-d824-482a-9507-8355b35b6c34" alt="image" /></p>

<h4 id="asymptotic-analysis"><strong>Asymptotic Analysis</strong></h4>
<p>For large p, approximate <img src="https://github.com/user-attachments/assets/8de93fef-59da-4355-8d1d-cc1a6c2e6d9c" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/655b9dce-b4e5-46ac-aa02-782a49ad0a6e" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/0375aae1-586f-4e20-b448-283efad4cbbf" alt="image" /></p>

<h3 id="key-result"><strong>Key Result</strong></h3>
<p>For <img src="https://github.com/user-attachments/assets/397d5af8-636a-43a2-88aa-ea150400ae1e" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/c24e66cb-7150-4e6f-b184-3700256b68e1" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/ebaa2c07-739a-4175-92b5-b215fafd4516" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/f4949736-0990-4bcc-95ee-bcaa32d280c5" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/a76110a5-5af9-4f59-b338-7f219cca76a0" alt="image" /></p>

<p><strong>Interpretation</strong>: Over 90% of the volume lies within <img src="https://github.com/user-attachments/assets/b0d9a8f6-cece-4817-a561-14679d55cf73" alt="image" /> of the equator in high dimensions.</p>
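<p>This can be checked empirically by sampling uniformly on the sphere (normalized Gaussians) and measuring the mass of a band of half-width c/√p around the equator; c = 2 is an assumed constant here, standing in for the constant in the statement above. A sketch assuming NumPy:</p>

```python
import numpy as np

def equator_band_mass(p, c=2.0, n=100_000, seed=2):
    """Fraction of uniform points on the unit sphere with |x1| <= c / sqrt(p)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, p))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform on the sphere
    return (np.abs(x[:, 0]) <= c / np.sqrt(p)).mean()
```

For p = 1000 the estimate comes out well above 0.9, consistent with the interpretation above.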

<h1 id="3-concentration-in-an-annulus-at-the-boundary"><strong>3. Concentration in an Annulus at the Boundary</strong></h1>

<p><img src="https://github.com/user-attachments/assets/497716e8-08bd-4bae-ae92-d69fa121c15c" alt="image" /></p>

<h2 id="volume-of-an-annulus"><strong>Volume of an Annulus</strong></h2>
<p>The annulus <img src="https://github.com/user-attachments/assets/9c90755c-22cc-41b2-8b23-03da8d7f3c0a" alt="image" /> has volume</p>

<p><img src="https://github.com/user-attachments/assets/726a1753-de35-47ad-8cf0-4b9a1e141629" alt="image" /></p>

<h2 id="exponential-decay"><strong>Exponential Decay</strong></h2>
<p>For small <img src="https://github.com/user-attachments/assets/cf539daf-3dfe-4afc-a977-2bcb4f4fec19" alt="image" /></p>

<p><img src="https://github.com/user-attachments/assets/06d08523-6b18-4ce8-8c69-cf8424415344" alt="image" /></p>

<p><strong>Interpretation</strong>: For <img src="https://github.com/user-attachments/assets/99b18fda-513e-4d2d-9539-dab31b75a72b" alt="image" />
, nearly all volume concentrates in a thin shell of thickness <img src="https://github.com/user-attachments/assets/52cf3ae7-65fd-4e7b-8d09-2d0d26af8c12" alt="image" /></p>
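<p>Because the volume of a ball of radius r scales as r<sup>p</sup>, the shell fraction has a one-line closed form, which makes the exponential concentration easy to verify:</p>

```python
def shell_fraction(p, eps):
    """Fraction of the unit ball's volume in the shell 1 - eps <= r <= 1.
    Follows from V(r) = V(1) * r^p, so the fraction is 1 - (1 - eps)^p."""
    return 1 - (1 - eps) ** p

# shell_fraction(100, 0.05) already exceeds 0.99: for p = 100, nearly all
# of the volume sits in a shell of thickness 0.05 at the boundary.
```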

<p><img src="https://github.com/user-attachments/assets/46b4fac6-f9bc-4548-85e0-d1b1cb197b0d" alt="image" /></p>]]></content><author><name>Chamod Shyamal Madusan</name></author><category term="Blog" /><category term="Post Formats" /><category term="readability" /><category term="standard" /><summary type="html"><![CDATA[1. Volume of a p-Dimensional Unit Sphere]]></summary></entry><entry><title type="html">Unity 6 vs Unreal Engine 5.5 Which One Is Better?</title><link href="https://madusanakcs.github.io/blog/unity-unreal/" rel="alternate" type="text/html" title="Unity 6 vs Unreal Engine 5.5 Which One Is Better?" /><published>2024-10-03T00:00:00+00:00</published><updated>2024-10-03T00:00:00+00:00</updated><id>https://madusanakcs.github.io/blog/unity-unreal</id><content type="html" xml:base="https://madusanakcs.github.io/blog/unity-unreal/"><![CDATA[<p>In My Experience, when comparing Unity 6 to Unreal Engine 5.5, the key difference lies in their strengths: while Unreal Engine 5.5 excels in pushing the boundaries of high-end graphics with features like Nanite and Lumen, making it ideal for AAA titles, Unity 6 prioritizes accessibility, scalability, and a user-friendly interface, making it a better choice for indie developers and projects where ease of use is crucial. Both engines are constantly evolving, but currently, Unreal Engine 5.5 is considered the leader in visual fidelity, while Unity 6 shines in its developer-friendly features and broad platform support.</p>

<h2 id="key-points-to-consider">Key Points to Consider</h2>

<h3 id="graphics">Graphics</h3>
<ul>
  <li><strong>Unreal Engine 5.5</strong> boasts superior graphical capabilities with advanced features like Nanite for high-detail geometry and Lumen for dynamic lighting, making it the go-to for visually stunning games.</li>
</ul>

<h3 id="ease-of-use">Ease of Use</h3>
<ul>
  <li><strong>Unity 6</strong> is generally considered more beginner-friendly with a simpler workflow and intuitive interface, making it easier to learn for new developers.</li>
</ul>

<h3 id="asset-store">Asset Store</h3>
<ul>
  <li>Both engines have extensive asset stores, but <strong>Unity’s</strong> is often cited as having a wider variety of readily available assets, particularly for 2D and mobile development.</li>
</ul>

<h3 id="platform-support">Platform Support</h3>
<ul>
  <li><strong>Unity 6</strong> provides strong support for a wider range of platforms, including mobile, web, and VR, whereas <strong>Unreal Engine</strong> might have a slight edge in high-end PC and console development.</li>
</ul>

<h2 id="unity-6-advantages">Unity 6 Advantages</h2>
<ul>
  <li><strong>User-friendly interface</strong>: Easier to learn and navigate for beginners.</li>
  <li><strong>Large asset store</strong>: Diverse collection of readily available assets.</li>
  <li><strong>Broad platform support</strong>: Seamless development across various platforms including mobile and web.</li>
  <li><strong>Improved performance optimizations</strong>: Recent updates in Unity 6 focus on performance enhancements.</li>
</ul>

<!-- Courtesy of embedresponsively.com -->

<div class="responsive-video-container">
    <iframe src="https://www.youtube-nocookie.com/embed/o1JIK5W3DRU" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
  </div>

<p class="text-center">Unity 6</p>

<h2 id="unreal-engine-55-advantages">Unreal Engine 5.5 Advantages</h2>
<ul>
  <li><strong>High-fidelity graphics</strong>: Leading-edge visual features like Nanite and Lumen for stunning visuals.</li>
  <li><strong>Advanced lighting systems</strong>: Real-time global illumination with high level of detail.</li>
  <li><strong>AAA game development focus</strong>: Designed for complex, visually demanding projects.</li>
  <li><strong>Cinematic tools</strong>: Powerful tools for creating high-quality cinematics.</li>
</ul>

<!-- Courtesy of embedresponsively.com -->

<div class="responsive-video-container">
    <iframe src="https://www.youtube-nocookie.com/embed/p9XgF3ijVRQ" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
  </div>

<p class="text-center">Unreal Engine 5.5</p>]]></content><author><name>Chamod Shyamal Madusan</name></author><category term="Blog" /><summary type="html"><![CDATA[In My Experience, when comparing Unity 6 to Unreal Engine 5.5, the key difference lies in their strengths: while Unreal Engine 5.5 excels in pushing the boundaries of high-end graphics with features like Nanite and Lumen, making it ideal for AAA titles, Unity 6 prioritizes accessibility, scalability, and a user-friendly interface, making it a better choice for indie developers and projects where ease of use is crucial. Both engines are constantly evolving, but currently, Unreal Engine 5.5 is considered the leader in visual fidelity, while Unity 6 shines in its developer-friendly features and broad platform support.]]></summary></entry><entry><title type="html">Brain Hemorrhage Detection and Localization</title><link href="https://madusanakcs.github.io/blog/brain/" rel="alternate" type="text/html" title="Brain Hemorrhage Detection and Localization" /><published>2023-11-16T00:00:00+00:00</published><updated>2023-11-16T00:00:00+00:00</updated><id>https://madusanakcs.github.io/blog/brain</id><content type="html" xml:base="https://madusanakcs.github.io/blog/brain/"><![CDATA[<p>Intracranial hemorrhage (ICH) is a critical medical condition characterized by bleeding within the intracranial vault. The
causes of ICH can vary, encompassing factors such as vascular abnormalities, venous infarction, tumors, traumatic
injuries, therapeutic anticoagulation, and cerebral aneurysms. Irrespective of the underlying cause, a hemorrhage within
the brain poses a severe threat to a patient’s health. Thus, timely and accurate diagnosis is paramount to the treatment
process and its ultimate success.</p>

<p>The conventional diagnostic approach for ICH involves a combination of patient medical history, physical examination,
and non-contrast computed tomography (CT) imaging of the brain. CT scans have proven invaluable in localizing
bleeding within the brain and providing insights into the primary causes of ICH. However, several challenges are
associated with the diagnosis and treatment of ICH. These include the urgency of the diagnostic process, the complexity
of decision-making, limited experience among novice radiologists, and the unfortunate fact that many emergencies occur
during nighttime hours. Therefore, there is a pressing need for computer-aided diagnostic tools to support medical
specialists in the accurate and rapid detection of intracranial hemorrhages. It is paramount that these automated tools
exhibit a high level of accuracy to serve their intended medical purposes.</p>

<p>Depending on the anatomic site of bleeding within the brain, different subtypes of ICH can be distinguished. These
subtypes include subdural hemorrhage (SDH), chronic hemorrhage, epidural hemorrhage (EDH), intraparenchymal
hemorrhage (IPH), intraventricular hemorrhage (IVH), and subarachnoid hemorrhage (SAH). Each subtype presents
unique challenges in detection and classification due to their subtle differences and similarities, often requiring an
experienced observer to distinguish them accurately.</p>

<p><img src="/assets/images/btype.png" alt="" /></p>

<p>In this post, we present a method for detecting various subtypes of intracranial hemorrhage in brain CT scans. Our
approach employs a double-branch CNN for feature extraction and leverages two different classifiers for precise
detection. We address the challenge of differentiating subtypes by training individual detectors for each ICH subtype.
Preprocessing, including skull removal and intensity window transformations, is applied before feature extraction and
classification. Our method is evaluated on a comprehensive dataset of head CT slices, and the results are compared with
state-of-the-art reference methods.</p>

<p>This report outlines the materials and methods used, presents the results, and discusses the contributions and implications
of our approach in the context of brain hemorrhage detection. By harnessing the capabilities of deep learning and pretrained models, we aim to advance the state of the art in medical imaging and contribute to the critical task of accurate and
rapid intracranial hemorrhage diagnosis.</p>

<h1 id="existing-alternatives-related-work">Existing Alternatives (Related Work)</h1>

<p>In the ever-evolving landscape of Brain Hemorrhage Detection, it is essential to take stock of the existing alternatives that have paved the way for innovative solutions. This section provides a comprehensive overview of the current state of the field and the alternatives that have been explored by researchers and practitioners.</p>

<h2 id="1-traditional-machine-learning-methods">1. Traditional Machine Learning Methods</h2>

<p>Historically, traditional machine learning techniques have been instrumental in Brain Hemorrhage Detection. Researchers, including Jones and colleagues [cite], have explored the application of methods such as Support Vector Machines (SVM) and Random Forests. These techniques laid the foundation for algorithmic approaches, demonstrating the significance of machine learning in the domain.</p>

<h2 id="2-convolutional-neural-networks-cnns">2. Convolutional Neural Networks (CNNs)</h2>

<p>The advent of Convolutional Neural Networks (CNNs) has heralded a new era in medical imaging and Brain Hemorrhage Detection. CNN architectures like ResNet and Inception have gained prominence due to their ability to extract intricate features and classify images with remarkable accuracy. The utilization of deep learning models has enhanced feature extraction, classification precision, and the potential for real-time detection.</p>

<h2 id="3-imaging-techniques-ct-and-mri">3. Imaging Techniques: CT and MRI</h2>

<p>Medical imaging techniques remain pivotal in this field. Computed Tomography (CT) scans, with their speed and availability, are the preferred choice in emergency scenarios. Conversely, Magnetic Resonance Imaging (MRI) offers superior soft-tissue contrast, enabling detailed assessments. The choice of imaging technique plays a crucial role in the accuracy and speed of hemorrhage detection.</p>

<h2 id="4-challenges-and-opportunities">4. Challenges and Opportunities</h2>

<p>Despite these alternatives, Brain Hemorrhage Detection encounters challenges, including the need for extensive annotated datasets and computational requirements. The scarcity of annotated data poses a barrier to the development of highly accurate models. Moreover, the computational demands can limit real-time applications. Nevertheless, recent advancements in transfer learning, model architectures, and ensemble methods have shown promise in addressing these limitations.</p>

<h1 id="methods">Methods</h1>

<p>We loaded and analyzed the DICOM files using Python’s pydicom library. Each file contains medical image data together
with metadata such as patient information and imaging parameters. We extracted essential fields, including the patient’s
name, study date, imaging modality, and pixel spacing, as the foundation for our analysis. We then converted the pixel
data into a structured NumPy array to enable image processing and detailed examination. The report type, embedded in the
DICOM header, was determined by inspecting specific study, series, or instance attributes, since its location can vary
with the structure of the DICOM file. With this data in hand, we proceeded to the analysis described below.</p>

<h1 id="preprocessing">Preprocessing</h1>

<p>Preprocessing is a crucial step in CT image analysis that involves enhancing the quality of images by removing noise,
artifacts, and other distortions. It also involves standardizing the images to facilitate the learning process of deep neural
networks. The following window types can be used for DICOM images:
  <img src="/assets/images/window.png" alt="" /></p>

<p>We are using the Sigmoid BSB window among the various windowing methods because it provides improved contrast
and visibility of blood and soft tissues in medical images, making it particularly effective for diagnosing conditions like
intracranial hemorrhages.</p>
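<p>The exact Sigmoid BSB transform is not reproduced here; the sketch below instead uses the common linear brain/subdural/soft-tissue window triple (the centers and widths are assumptions) to show the general idea of stacking several intensity windows as image channels:</p>

```python
import numpy as np

def window(hu, center, width):
    """Clip Hounsfield units to a window and rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def bsb_windows(hu):
    """Stack brain, subdural, and soft-tissue windows as three channels.
    The (center, width) pairs are typical values, assumed for illustration."""
    return np.stack([
        window(hu, 40, 80),    # brain window
        window(hu, 80, 200),   # subdural window
        window(hu, 40, 380),   # soft-tissue window
    ], axis=-1)
```

Stacking windows this way lets a CNN see blood, brain tissue, and bone contrast simultaneously in one three-channel input.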

<h1 id="data-preparation">Data Preparation</h1>

<p>The process of data preparation is illustrated in the diagram below, which outlines the key steps and procedures involved
in getting the data ready for analysis.</p>

<p>The DataLoader class is a Python class designed for efficiently loading and processing data for machine learning projects. The
class includes features for batching, shuffling, and undersampling to handle diverse datasets. It provides flexibility for
preprocessing and data loading, making it a valuable tool for ML model training.</p>

<p><img src="/assets/images/csv.png" alt="" /></p>
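<p>The original DataLoader class is not shown; a minimal sketch of the batching and shuffling behaviour described above might look like this (under-sampling omitted for brevity):</p>

```python
import numpy as np

class DataLoader:
    """Minimal batching/shuffling loader (hypothetical sketch)."""

    def __init__(self, X, y, batch_size=32, shuffle=True, seed=0):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.batch_size, self.shuffle = batch_size, shuffle
        self.rng = np.random.default_rng(seed)

    def __iter__(self):
        idx = np.arange(len(self.X))
        if self.shuffle:
            self.rng.shuffle(idx)  # new order every epoch
        for start in range(0, len(idx), self.batch_size):
            b = idx[start:start + self.batch_size]
            yield self.X[b], self.y[b]
```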

<h1 id="cnn-model">CNN Model</h1>

<p>Our model architecture combines two powerful deep-learning components: a ResNet-based classification model and a YOLO-based object detection model.</p>

<p>The first component, built on the ResNet50V2 architecture, serves as the backbone for classifying different types of brain hemorrhages. ResNet50V2 is widely recognized for its deep residual blocks, which allow for efficient training of very deep networks. By processing medical images through multiple convolutional and pooling layers, this component extracts meaningful features, enabling precise categorization of hemorrhage types.</p>

<p>The second component, based on the YOLO (You Only Look Once) architecture, specializes in the precise localization of hemorrhages within brain scans. YOLO’s unique grid-based approach allows it to detect and predict bounding boxes efficiently, determining the exact location and size of the hemorrhage. Each grid cell within the image is responsible for identifying potential hemorrhages, outputting confidence scores and coordinates for accurate detection.</p>

<p>By combining these two models, our architecture provides a comprehensive solution for brain hemorrhage diagnosis. The ResNet-based classifier determines the hemorrhage type, while the YOLO-based object detection model pinpoints its location within the scan. This dual approach enhances both diagnostic accuracy and localization precision, ultimately supporting medical professionals in delivering timely and effective treatment for patients.</p>
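<p>To make the grid-based localization concrete, here is a hypothetical decoder for a YOLO-style output grid in which each cell predicts a confidence score and a box; the S × S × 5 layout is an assumption for illustration, not the exact model used:</p>

```python
import numpy as np

def decode_yolo_grid(pred, conf_thresh=0.5):
    """Decode an S x S x 5 grid (conf, x, y, w, h per cell) into boxes in
    image-relative coordinates. Hypothetical single-box-per-cell sketch."""
    S = pred.shape[0]
    boxes = []
    for i in range(S):
        for j in range(S):
            conf, x, y, w, h = pred[i, j]
            if conf >= conf_thresh:
                # (x, y) are offsets within the cell; map to image coordinates
                cx, cy = (j + x) / S, (i + y) / S
                boxes.append((conf, cx, cy, w, h))
    return boxes
```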

<p><img src="/assets/images/bmeth.png" alt="" /></p>

<h1 id="results">Results</h1>

<p>The results of the model evaluation show promising performance metrics. The model achieved a test accuracy of
91.75%, indicating its ability to correctly classify data points. It also demonstrated a precision of 83.75%, emphasizing its
skill in correctly identifying positive cases. The recall score of 62.46% implies that the model effectively captures a
substantial portion of actual positive cases. Additionally, the area under the curve (AUC) value of 94.07% signifies a high
level of discrimination ability. Despite these strong metrics, the model’s F1 score of 42.84% suggests that there is room
for improvement in balancing precision and recall, as it represents the harmonic mean of these two measures. Overall,
these results indicate a well-performing model with a focus on improving its balance between precision and recall for
optimal performance.
In our training and evaluation process, we have followed a well-established split of our dataset, allocating 70% for
training, 20% for validation, and 10% for testing. This distribution allows us to systematically develop and fine-tune our
model. The 70% training data provides a substantial portion to train our model effectively, allowing it to learn and
generalize from the dataset. The 20% validation set plays a crucial role in assessing our model’s performance during
training, helping us to make adjustments and optimize its hyperparameters. Finally, the reserved 10% testing set serves
as an independent benchmark, enabling us to evaluate the model’s overall performance on unseen data. This approach
ensures a robust and unbiased assessment of our model’s capabilities, setting the foundation for reliable results and
valuable insights in our research</p>
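<p>The 70/20/10 split and the metric definitions above can be sketched with scikit-learn. The data and predictions below are random placeholders, not our actual model outputs; the point is the mechanics of the split and the evaluation.</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Stand-in data: 1000 "scans" with binary hemorrhage-present labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 16))
y = rng.integers(0, 2, size=1000)

# 70/20/10 split in two steps: carve off 30%, then split that 30%
# into validation (2/3 of it = 20% overall) and test (1/3 = 10%).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=1/3, random_state=0)

# Placeholder predictions and scores, standing in for model output.
y_pred = rng.integers(0, 2, size=len(y_test))
y_score = rng.random(size=len(y_test))

print(f"accuracy  = {accuracy_score(y_test, y_pred):.4f}")
print(f"precision = {precision_score(y_test, y_pred):.4f}")
print(f"recall    = {recall_score(y_test, y_pred):.4f}")
print(f"F1        = {f1_score(y_test, y_pred):.4f}")  # harmonic mean of P and R
print(f"AUC       = {roc_auc_score(y_test, y_score):.4f}")
```

<p>Splitting in two seeded steps keeps every subset reproducible, and the validation set never leaks into the final test-set evaluation.</p>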

<p><img src="/assets/images/image.png" alt="" /></p>

<h1 id="discussion">Discussion</h1>

<p>The recent release of the brain hemorrhage detection competition by the Radiological Society of North America (RSNA) represents a pivotal moment in the field of medical imaging. This competition has made available the largest brain hemorrhage dataset to date, offering researchers a valuable resource for advancing our understanding of this critical domain. However, the dataset poses a unique challenge: the precise location of hemorrhages is not explicitly delineated in each image, and the examinations do not use thin-slice series, which could impact certain diagnostic tasks.</p>

<p>In response to these challenges, our model presents an innovative solution: it can both classify the type of brain hemorrhage and precisely locate it within the images. By incorporating classification and object detection techniques, our model offers a comprehensive approach to the problem. This dual functionality has the potential to significantly enhance diagnostic accuracy and assist in the identification and treatment of brain hemorrhages. Our model thus becomes a valuable asset in making the most of the RSNA dataset, addressing its challenges and contributing to the advancement of medical imaging for brain hemorrhage detection.</p>

<p><img src="/assets/images/bgraph.png" alt="" /></p>

<h1 id="future-improvements-and-extensions">Future Improvements and Extensions</h1>

<p>The future of brain hemorrhage detection using AI-powered technologies holds both promise and challenges. Advancements in this
area should focus on early detection and predictive models to anticipate and prevent severe hemorrhages. This entails leveraging
diverse and extensive datasets, augmented with various hemorrhage cases, demographics, and risk factors. Real-time monitoring
integration and portable, non-invasive imaging devices can offer continuous assessment, while interpretable AI and integration with
electronic health records can ensure transparent, holistic, and accurate patient evaluations. Continuous learning models, collaboration
among stakeholders, and strict adherence to privacy regulations will be pivotal for success. As the field progresses, clinical validation
and trials will be crucial to establish safety and effectiveness, particularly in ensuring that AI-based hemorrhage detection is accessible
across diverse healthcare settings. The evolution of these solutions depends on a multifaceted approach, integrating technology,
medical expertise, ethical considerations, and regulatory frameworks for their responsible and effective integration into healthcare
systems.</p>

<h1 id="used-resources">Used resources</h1>

<p>We strategically harnessed the power of Kaggle as a central resource. Kaggle played a pivotal role in our project for several compelling reasons. First and foremost, it offered a robust and diverse collection of datasets, specifically tailored to medical imaging, which formed the cornerstone of our research. Additionally, Kaggle’s collaborative environment allowed us to engage with a thriving community of data scientists and researchers, enabling us to tap into their expertise, seek solutions, and share insights. Crucially, Kaggle provided access to high-performance computing resources, including GPUs such as the NVIDIA Tesla P100, which greatly expedited the training of our deep learning models and substantially reduced overall training time. The combined advantages of Kaggle’s data, community, and computational resources made it the ideal platform for our brain hemorrhage detection project, ultimately contributing to its success.</p>
Each subtype presents unique challenges in detection and classification due to their subtle differences and similarities, often requiring an experienced observer to distinguish them accurately]]></summary></entry></feed>