Jinsoolve.
Created At: 2024/12/26
The cost function of logistic regression is as follows:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[\,y^{(i)}\log \hat{y}^{(i)} + \bigl(1-y^{(i)}\bigr)\log\bigl(1-\hat{y}^{(i)}\bigr)\right]$$
Why does the cost function look like this?
To understand this, we first need to understand the concept of log-odds.
In linear regression, the prediction values range from $-\infty$ to $\infty$.
In logistic regression, however, there is a key difference: instead of predicting a value directly, it works with the log-odds. The odds of an event with probability $p$ are defined as:

$$\text{odds} = \frac{p}{1-p}$$

However, since the odds range over $[0, \infty)$ and are asymmetric, they are difficult to use as-is.
To address this, a log function is applied as follows:

$$\text{logit}(p) = \log\frac{p}{1-p}$$

However, the logit still has a range of $(-\infty, \infty)$, which makes it unsuitable for direct use as a probability.
The sigmoid function is used to map this range to $[0, 1]$, making it suitable for representing probabilities.
Let $z = \log\dfrac{p}{1-p}$. Solving for $p$ gives $e^{z} = \dfrac{p}{1-p}$, and therefore:

$$p = \frac{e^{z}}{1+e^{z}} = \frac{1}{1+e^{-z}} = \sigma(z)$$
The sigmoid function helps transform the logit into a probability range. For better understanding, let’s illustrate this with a graph:
This process leads to the well-known sigmoid function graph.
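The logit-to-probability mapping described above can be sketched in a few lines of Python (the function names here are illustrative, not from any particular library):

```python
import math

def sigmoid(z: float) -> float:
    """Map a logit z in (-inf, inf) to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logit(p: float) -> float:
    """Inverse of the sigmoid: the log-odds of probability p."""
    return math.log(p / (1.0 - p))

# A logit of 0 corresponds to even odds, i.e. p = 0.5.
print(sigmoid(0.0))
# sigmoid and logit are inverses: this recovers 0.9 (up to float error).
print(sigmoid(logit(0.9)))
```

Plotting `sigmoid` over a range of `z` values reproduces the S-shaped curve discussed above.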
Now we have some understanding of what $\hat{y} = \sigma(z)$ means.
But what does $-\log \hat{y}$ represent?
Let's plot the graph of $-\log \hat{y}$ (with $\log = \ln$):
Since $\hat{y}$ ranges over $[0, 1]$, it looks like this:
The closer the prediction $\hat{y}$ is to 1, the smaller the value of $-\log \hat{y}$. Conversely, the further the prediction is from 1, the larger the value.
In other words, this represents a loss function: the worse the prediction, the higher the loss.
(Note that the cost function is the average of the loss functions across the entire dataset.)
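To see this behavior numerically, here is a small sketch evaluating $-\log \hat{y}$ for predictions at varying distances from 1:

```python
import math

# Loss for a sample whose true label is 1: -log(y_hat).
# The worse the prediction (further from 1), the larger the loss.
for y_hat in (0.99, 0.9, 0.5, 0.1, 0.01):
    loss = -math.log(y_hat)  # natural log, as in the graph above
    print(f"y_hat = {y_hat:>4}: loss = {loss:.4f}")
```

The loss grows without bound as $\hat{y} \to 0$, which heavily penalizes confident wrong predictions.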
Similarly, $-\log(1-\hat{y})$ can be considered the loss function for samples whose true label is 0.
Thus, for all m data samples:
When a sample belongs to class 1, add $-\log \hat{y}^{(i)}$.
When a sample belongs to class 0, add $-\log\bigl(1-\hat{y}^{(i)}\bigr)$.
This gives the total loss value.
Hence, the following equation holds:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[\,y^{(i)}\log \hat{y}^{(i)} + \bigl(1-y^{(i)}\bigr)\log\bigl(1-\hat{y}^{(i)}\bigr)\right]$$
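The averaged loss described above can be sketched in plain Python (a minimal illustration, not a production implementation; real code would also guard against $\log 0$):

```python
import math

def cross_entropy_cost(y_true: list[int], y_pred: list[float]) -> float:
    """Average logistic loss over m samples.

    y_true: labels in {0, 1}; y_pred: predicted probabilities in (0, 1).
    """
    m = len(y_true)
    total = 0.0
    for y, y_hat in zip(y_true, y_pred):
        # class 1 contributes -log(y_hat); class 0 contributes -log(1 - y_hat)
        total += -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))
    return total / m

# Confident, mostly correct predictions give a small cost.
print(cross_entropy_cost([1, 0, 1], [0.9, 0.1, 0.8]))
```

Because each term selects exactly one of the two logs depending on the label, this is the same quantity as the single formula above.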