If 'B is more likely given A', then 'A is more likely given B'
I am trying to get a clearer intuition behind: "If $A$ makes $B$ more likely then $B$ makes $A$ more likely", i.e.

Let $n(S)$ denote the size of the space in which $A$ and $B$ lie.

Claim: if $P(B|A)>P(B)$, i.e. $n(AB)/n(A) > n(B)/n(S)$, then $n(AB)/n(B) > n(A)/n(S)$, which is $P(A|B)>P(A)$.

I understand the math, but why does this make intuitive sense?

Tags: probability, inference, conditional-probability, intuition, association-measure
asked yesterday by Rahul Deora; edited 1 hour ago by Martijn Weterings

Comment – Martijn Weterings (1 hour ago): I edited the question to remove the word 'make'. This question sounded a bit like those ambiguous questions on Facebook, where you have to solve some algebraic sum with pictures and people get widely different answers due to different interpretations of the question. That is not something we want here. (An alternative is to close the question as unclear and have the OP change it.)
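Before turning to the answers, here is a minimal numeric illustration of the counting identity in the question (my own sketch, with made-up counts): both conditional inequalities are just the symmetric statement $n(AB)\,n(S) > n(A)\,n(B)$ written in two ways.

    # Hypothetical counts for a finite sample space S and events A, B (illustrative only).
    n_S  = 100   # |S|
    n_A  = 30    # |A|
    n_B  = 40    # |B|
    n_AB = 20    # |A and B|; independence would predict 30*40/100 = 12

    print(n_AB / n_A > n_B / n_S)    # P(B|A) > P(B)  -> True
    print(n_AB / n_B > n_A / n_S)    # P(A|B) > P(A)  -> True
    print(n_AB * n_S > n_A * n_B)    # the common, symmetric form -> True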
12 Answers
I think another mathematical way of putting it may help. Consider the claim in the context of Bayes' rule:

Claim: if $P(B|A)>P(B)$ then $P(A|B) > P(A)$

Bayes' rule:

$$ P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)} $$

assuming $P(B)$ is nonzero. Thus

$$ \frac{P(A \mid B)}{P(A)} = \frac{P(B \mid A)}{P(B)} $$

If $P(B|A)>P(B)$, then $\frac{P(B \mid A)}{P(B)} > 1$.

Then $\frac{P(A \mid B)}{P(A)} > 1$, and so $P(A|B) > P(A)$.

This proves the claim and an even stronger conclusion: the respective proportions of the likelihoods must be equal.

– Aaron Hall (answered yesterday)
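A quick numeric check of this argument (my own sketch, not part of the answer; the joint probabilities are arbitrary): whatever joint distribution you pick, the two ratios $P(A|B)/P(A)$ and $P(B|A)/P(B)$ come out identical, so if one exceeds 1 the other does too.

    # Arbitrary joint distribution over two events A and B (must sum to 1).
    p_AB, p_A_notB, p_notA_B = 0.20, 0.10, 0.25   # P(A∩B), P(A∩¬B), P(¬A∩B); the rest is P(¬A∩¬B)

    p_A = p_AB + p_A_notB            # P(A) = 0.30
    p_B = p_AB + p_notA_B            # P(B) = 0.45

    ratio_A = (p_AB / p_B) / p_A     # P(A|B) / P(A)
    ratio_B = (p_AB / p_A) / p_B     # P(B|A) / P(B)
    print(ratio_A, ratio_B)          # both ≈ 1.48: A and B lift each other by the same factor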
Comment – probabilityislogic (9 hours ago): I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely".

Comment – Acccumulation (34 mins ago): @probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
Well, I don't like the word "makes" in the question. That implies some sort of causality, and causality usually doesn't reverse.

But you asked for intuition. So, I'd think about some examples, because that seems to spark intuition. Choose one you like:

If a person is a woman, it is more likely that the person voted for a Democrat.
If a person voted for a Democrat, it is more likely that the person is a woman.

If a man is a professional basketball center, it is more likely that he is over 2 meters tall.
If a man is over 2 meters tall, it is more likely that he is a basketball center.

If it is over 40 degrees Celsius, it is more likely that there will be a blackout.
If there has been a blackout, it is more likely that it is over 40 degrees.

And so on.

– Peter Flom♦ (answered yesterday)
Comment – Peter Flom♦ (yesterday): That's not about probability. That's about 1 to 1 relationships.

Comment – hobbs (yesterday): @jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.

Comment – hobbs (yesterday): Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.

Comment – Peter Flom♦ (21 hours ago): @Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.

Comment – Peter Flom♦ (21 hours ago): More likely than someone who is less than 2 meters tall.
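A small simulation in the spirit of hobbs's rain/wet-street comment (my own sketch; the probabilities are invented): in the same set of samples, the street is wet more often when it rains, and it is raining more often when the street is wet.

    import random

    random.seed(0)
    samples = []
    for _ in range(100_000):
        rain = random.random() < 0.3
        # the street can be wet for other reasons too, but rain makes it much more likely
        wet = random.random() < (0.9 if rain else 0.1)
        samples.append((rain, wet))

    p_wet  = sum(w for _, w in samples) / len(samples)
    p_rain = sum(r for r, _ in samples) / len(samples)
    rainy    = [s for s in samples if s[0]]
    wet_ones = [s for s in samples if s[1]]

    print(sum(w for _, w in rainy) / len(rainy), ">", p_wet)       # P(wet|rain) ≈ 0.90 > 0.34
    print(sum(r for r, _ in wet_ones) / len(wet_ones), ">", p_rain)  # P(rain|wet) ≈ 0.79 > 0.30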
To add on the answer by @Dasherman: What can it mean to say that two events are related, or maybe associated or correlated? Maybe we could, as a definition, compare the joint probability with the product of the marginals (assuming $P(A)>0$, $P(B)>0$):

$$ \eta(A,B) = \frac{P(A \cap B)}{P(A)\, P(B)} $$

so if $\eta$ is larger than one, $A$ and $B$ occur together more often than under independence. Then we can say that $A$ and $B$ are positively related.

But now, using the definition of conditional probability, $\frac{P(A \cap B)}{P(A)\, P(B)} > 1$ is an easy consequence of $P(B \mid A) > P(B)$. But $\frac{P(A \cap B)}{P(A)\, P(B)}$ is completely symmetric in $A$ and $B$ (interchanging all occurrences of the symbol $A$ with $B$ and vice versa leaves the same formula), so it is also equivalent to $P(A \mid B) > P(A)$. That gives the result. So the intuition you ask for is that $\eta(A,B)$ is symmetric in $A$ and $B$.

The answer by @gunes gave a practical example, and it is easy to make others the same way.

– kjetil b halvorsen (answered yesterday)
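A short numeric sketch of the $\eta$ measure (my own code with illustrative probabilities): $\eta$ is built only from quantities symmetric in $A$ and $B$, which is why the two conditional inequalities hold or fail together.

    # Illustrative probabilities (assumed, not from the answer).
    p_A, p_B, p_AB = 0.4, 0.5, 0.3        # P(A), P(B), P(A ∩ B)

    eta = p_AB / (p_A * p_B)              # symmetric in A and B
    print(eta)                            # 1.5 > 1: positively related

    print(p_AB / p_A > p_B)               # P(B|A) > P(B)  -> True
    print(p_AB / p_B > p_A)               # P(A|B) > P(A)  -> True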
By way of intuition, real world examples such as Peter Flom gives are most helpful for some people. The other thing that commonly helps people is pictures. So, to cover most bases, let's have some pictures.

[The answer is illustrated with three diagrams of a box divided into red/green and plain/stripy regions; the images are not reproduced here.]

What we have here are two very basic diagrams showing probabilities. The first shows two independent predicates I'll call Red and Plain. It is clear that they are independent because the lines line up: the proportion of the plain area that is red is the same as the proportion of the stripy area that is red, and is also the same as the total proportion that is red.

In the second image, we have non-independent distributions. Specifically, we have expanded some of the plain red area into the stripy area without changing the fact that it is red. Clearly then, being red makes being plain more likely.

Meanwhile, have a look at the plain side of that image. Clearly the proportion of the plain region that is red is greater than the proportion of the whole image that is red. That is because the plain region has been given a bunch more area and all of it is red.

So, red makes plain more likely, and plain makes red more likely.

What's actually happening here? A is evidence for B (that is, A makes B more likely) when the area that contains both A and B is bigger than would be predicted if they were independent. Because the intersection between A and B is the same as the intersection between B and A, that also implies that B is evidence for A.

One note of caution: although the argument above seems very symmetrical, it may not be the case that the strength of evidence in both directions is equal. For example, consider the third image. Here the same thing has happened: plain red has eaten up territory previously belonging to stripy red. In fact, it has completely finished the job!

Note that a point being red now outright guarantees plainness, because there are no stripy red regions left. However, a point being plain does not guarantee redness, because there are still green regions left. Nevertheless, a point in the box being plain increases the chance that it is red, and a point being red increases the chance that it is plain. Both directions imply "more likely", just not by the same amount.

– Josiah (new contributor, answered 22 hours ago)
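A counting sketch of the third picture (my own numbers, chosen so that red guarantees plain while plain only makes red more likely): both conditional probabilities exceed their marginals, but the absolute jumps differ.

    # Hypothetical cell counts for the box: colour x texture (assumed values).
    counts = {("red",   "plain"): 20, ("red",   "stripy"):  0,
              ("green", "plain"): 30, ("green", "stripy"): 50}
    total = sum(counts.values())

    p_red   = sum(v for (c, _), v in counts.items() if c == "red")   / total   # 0.2
    p_plain = sum(v for (_, t), v in counts.items() if t == "plain") / total   # 0.5
    p_both  = counts[("red", "plain")] / total                                 # 0.2

    print(p_both / p_red,   "vs", p_plain)   # P(plain|red) = 1.0 vs 0.5: a guarantee
    print(p_both / p_plain, "vs", p_red)     # P(red|plain) = 0.4 vs 0.2: only more likely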
Comment – Pod (9 hours ago): I like the images :) However, it looks like either the images or the explanation is flipped. "In the second image, we have non-independent distributions. Specifically, we have moved some of the stripy red area into the plain area without changing the fact that it is red. Clearly then, being red makes being plain more likely." -- your second image has gained more plain area than the first, so going from image 1 to 2 we have moved plain area into the striped area.

Comment – Rahul Deora (7 hours ago): So, if I have a Venn diagram with some common A,B intersection area and all I do is increase that intersection area, I automatically add more A,B overlap to the entire space (without making the space larger) and change/increase n(A)/n(S) and n(B)/n(S) as a consequence. Right? More comments?
If A makes B more likely, this means the events are somehow related. This relation works both ways.

If A makes B more likely, this means that A and B tend to happen together. This then means that B also makes A more likely.

– Dasherman
Comment – mdewey (yesterday): This perhaps could use some expansion? Without a definition of related it is a bit empty.

Comment – Dasherman (yesterday): I was trying to stay away from anything rigorous, since OP asked for an intuitive explanation. You are right that it is quite empty as it is now, but I'm not sure how to expand it in an intuitive way. I have added an attempt.
If A makes B more likely, A has crucial information that B can infer about itself. Even though it might not contribute the same amount, that information is not lost the other way around. Eventually, we have two events whose occurrences support each other. I can't seem to imagine a scenario where the occurrence of A increases the likelihood of B while the occurrence of B decreases the likelihood of A. For example, if it rains, the floor will be wet with high probability, and if the floor is wet, that doesn't mean it rained, but it doesn't decrease the chances either.
You can make the math more intuitive by imagining a contingency table.

$$\begin{array}{c|ccc}
 & & A & \lnot A \\
 & a+b+c+d & a+c & b+d \\ \hline
B & a+b & a & b \\
\lnot B & c+d & c & d \\
\end{array}$$

When $A$ and $B$ are independent, the joint probabilities are products of the marginal probabilities:

$$\begin{array}{c|ccc}
 & & A & \lnot A \\
 & 1 & x & 1-x \\ \hline
B & y & a=xy & b=(1-x)y \\
\lnot B & 1-y & c=x(1-y) & d=(1-x)(1-y) \\
\end{array}$$

In that case the marginal and conditional probabilities are equal, e.g. $P(A) = P(A|B)$ and $P(B) = P(B|A)$. When there is no independence, you could see this as leaving the parameters $a,b,c,d$ the same (as products of the margins) but with an adjustment by $\pm z$:

$$\begin{array}{c|ccc}
 & & A & \lnot A \\
 & 1 & x & 1-x \\ \hline
B & y & a+z & b-z \\
\lnot B & 1-y & c-z & d+z \\
\end{array}$$

You could see this $z$ as breaking the equality of the marginal and conditional probabilities, or as breaking the relationship of the joint probabilities being products of the marginal probabilities.

Now, from this point of view (of breaking these equalities) you can see that this breaking happens in two ways, both for $P(A|B) \neq P(A)$ and for $P(B|A) \neq P(B)$. And in both cases the inequality will be $>$ when $z$ is positive and $<$ when $z$ is negative.

So you can see the connection "$P(A|B) > P(A)$ implies $P(B|A) > P(B)$" via the joint probability $P(B,A) > P(A)\,P(B)$.

If A and B often happen together (the joint probability is higher than the product of the marginal probabilities), then observing the one will make the (conditional) probability of the other higher.
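A sketch of the table construction above (my own code; the margins x, y and the shift z are illustrative values): start from independent cells built from the margins, add ±z, and both conditional probabilities rise above their marginals together.

    x, y, z = 0.4, 0.3, 0.05       # P(A), P(B), and the perturbation (assumed values)

    a = x * y + z                  # P(A ∩ B)
    b = (1 - x) * y - z            # P(¬A ∩ B)
    c = x * (1 - y) - z            # P(A ∩ ¬B)
    d = (1 - x) * (1 - y) + z      # P(¬A ∩ ¬B)

    p_A, p_B = a + c, a + b        # margins are unchanged: 0.4 and 0.3
    print(a / p_A, ">", p_B)       # P(B|A) = 0.425  > 0.3
    print(a / p_B, ">", p_A)       # P(A|B) ≈ 0.567  > 0.4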
You are told that Sam is a woman and Kim is a man, and that one of the two wears make-up and the other does not. Who of them would you guess wears make-up?

You are told that Sam wears make-up and Kim doesn't, and that one of the two is a man and one is a woman. Who would you guess is the woman?
Comment – Martijn Weterings (23 hours ago): It is not so straightforward to connect this to the original problem. What exactly is event A and what is event B? Here it seems more like some comparison of probabilities. Event A is 'x is a woman' (not-A is the event 'x is a man'), and event B is 'x wears makeup'. But now we suddenly have a Sam and a Kim; where does that come from, and should we use any information about the subjective masculinity or femininity of their names?
Suppose we denote the posterior-to-prior probability ratio of an event as:

$$\Delta(A|B) \equiv \frac{\mathbb{P}(A|B)}{\mathbb{P}(A)}$$

Then an alternative expression of Bayes' theorem (see this related post) is:

$$\Delta(A|B)
= \frac{\mathbb{P}(A|B)}{\mathbb{P}(A)}
= \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(A)\, \mathbb{P}(B)}
= \frac{\mathbb{P}(B|A)}{\mathbb{P}(B)}
= \Delta(B|A).$$

The posterior-to-prior probability ratio tells us whether the argument event is made more or less likely by the occurrence of the conditioning event (and how much more or less likely). The above form of Bayes' theorem shows us that the posterior-to-prior probability ratio is symmetric in the variables.$^\dagger$ For example, if observing $B$ makes $A$ more likely than it was a priori, then observing $A$ makes $B$ more likely than it was a priori.

$^\dagger$ Note that this is a probability rule, and so it should not be interpreted causally. This symmetry is true in a probabilistic sense for passive observation; however, it is not true if you intervene in the system to change $A$ or $B$. In that latter case you would need to use causal operations (e.g., the $\text{do}$ operator) to find the effect of the change in the conditioning variable.
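An exact-arithmetic check of the symmetry of $\Delta$ (my own sketch; the joint distribution is arbitrary):

    from fractions import Fraction as F

    # Arbitrary joint distribution over (A, B); the four cells sum to 1.
    p = {(True, True): F(1, 5), (True, False): F(1, 10),
         (False, True): F(1, 4), (False, False): F(9, 20)}

    p_A = p[(True, True)] + p[(True, False)]          # 3/10
    p_B = p[(True, True)] + p[(False, True)]          # 9/20

    delta_A_given_B = (p[(True, True)] / p_B) / p_A   # Δ(A|B)
    delta_B_given_A = (p[(True, True)] / p_A) / p_B   # Δ(B|A)

    print(delta_A_given_B == delta_B_given_A)         # True, exactly
    print(delta_A_given_B)                            # 40/27 > 1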
It seems there is some confusion between causation and correlation. Indeed, the question statement is false for causation, as can be seen by an example such as:

- If a dog is wearing a scarf, then it is a domesticated animal.

The following is not true:

- Seeing a domesticated animal wearing a scarf implies it is a dog.
- Seeing a domesticated dog implies it is wearing a scarf.

However, if you are thinking of probabilities (correlation) then it IS true:

- Dogs wearing scarfs are much more likely to be domesticated animals than dogs not wearing scarfs (or animals in general, for that matter).

The following is true:

- A domesticated animal wearing a scarf is more likely to be a dog than another animal.
- A domesticated dog is more likely to be wearing a scarf than a non-domesticated dog.

If this is not intuitive, think of a pool of animals including ants, dogs and cats. Dogs and cats can both be domesticated and wear scarfs; ants can do neither.

- If you increase the probability of domesticated animals in your pool, you will also increase the chance of seeing an animal wearing a scarf.
- If you increase the probability of either cats or dogs, then you will also increase the probability of seeing an animal wearing a scarf.

Being domesticated is the "secret" link between the animal and wearing a scarf, and that "secret" link will exert its influence both ways.

Edit: Giving an example to your question in the comments:

Imagine a world where animals are either Cats or Dogs. They can be either domesticated or not. They can wear a scarf or not. Imagine there exist 100 total animals, 50 Dogs and 50 Cats.

Now consider the statement A to be: "Dogs wearing scarfs are thrice as likely to be domesticated animals as dogs not wearing scarfs".

If A is not true, then you can imagine that the world could be made of 50 Dogs, 25 of them domesticated (of which 10 wear scarfs), 25 of them wild (of which 10 wear scarfs). Same stats for cats.

Then, if you saw a domesticated animal in this world, it would have a 50% chance of being a dog (25/50, 25 dogs out of 50 domesticated animals) and a 40% chance of having a scarf (20/50, 10 Dogs and 10 Cats out of 50 domesticated animals).

However, if A is true, then you have a world where there are 50 Dogs, 25 of them domesticated (of which 15 wear scarfs), 25 of them wild (of which 5 wear scarfs). Cats maintain the old stats: 50 Cats, 25 of them domesticated (of which 10 wear scarfs), 25 of them wild (of which 10 wear scarfs).

Then, if you saw a domesticated animal in this world, it would have the same 50% chance of being a dog (25/50, 25 dogs out of 50 domesticated animals) but a 50% chance of having a scarf (25/50, 15 Dogs and 10 Cats out of 50 domesticated animals).

As you can see, if A is true, then when you see a domesticated animal wearing a scarf in this world, it is more likely to be a Dog (60%, or 15/25) than any other animal (in this case a Cat: 40%, or 10/25).

– H4uZ (new contributor)
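The counts in the edited example can be tabulated directly (a sketch using the numbers given in the answer for the world where A is true):

    # (species, domesticated, wears_scarf) -> count, for the world where statement A is true
    world = {("dog", True,  True): 15, ("dog", True,  False): 10,
             ("dog", False, True):  5, ("dog", False, False): 20,
             ("cat", True,  True): 10, ("cat", True,  False): 15,
             ("cat", False, True): 10, ("cat", False, False): 15}

    dom = {k: v for k, v in world.items() if k[1]}               # domesticated animals
    n_dom = sum(dom.values())                                    # 50
    n_dom_scarf = sum(v for k, v in dom.items() if k[2])         # 25
    n_dom_scarf_dog = world[("dog", True, True)]                 # 15

    print(n_dom_scarf / n_dom)             # P(scarf | domesticated)      = 0.5
    print(n_dom_scarf_dog / n_dom_scarf)   # P(dog | domesticated, scarf) = 0.6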
Comment – Rahul Deora (8 hours ago): This is the line I have a problem with: "A domesticated animal wearing a scarf is more likely to be a dog than another animal." When we made our initial statement we did not make any claim on the other animals that could wear scarfs. There could be 100s. We only made a statement about dogs.

Comment – H4uZ (7 hours ago): See if my edit helps with your particular issue.
There is a confusion here between causation and correlation. So I'll give you an example where the exact opposite happens.
Some people are rich, some are poor. Some poor people are given benefits, which makes them less poor. But people who get benefits are still more likely to be poor, even with benefits.
If you are given benefits, that makes it more likely that you can afford cinema tickets. ("Makes it more likely" meaning causality). But if you can afford cinema tickets, that makes it less likely that you are among the people who are poor enough to get benefits, so if you can afford cinema tickets, you are less likely to get benefits.
Comment – wizzwizz4 (yesterday): This isn't an answer to the question. Interesting, but not an answer. In fact, it's talking about a different scenario; the reason the opposite happens is that it's using two different metrics that are named similarly (poor without benefits vs. poor with benefits) and as such is a completely different scenario.
The intuition becomes clear if you look at the stronger statement:

If A implies B, then B makes A more likely.

Implication:
A true -> B true
A false -> B true or false

Reverse implication:
B true -> A true or false
B false -> A false

Obviously A is more likely to be true if B is known to be true as well, because if B were false then A would be false too. The same logic applies to the weaker statement:

If A makes B more likely, then B makes A more likely.

Weak implication:
A true -> B true or (unlikely) false
A false -> B true or false

Reverse weak implication:
B true -> A true or false
B false -> A false or (unlikely) true

– Rainer P. (new contributor)
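A small simulation of the "weak implication" case (my own sketch with invented probabilities): A nearly always brings B along, and observing B in turn raises the probability of A.

    import random

    random.seed(1)
    n = 200_000
    count_A = count_B = count_AB = 0
    for _ in range(n):
        A = random.random() < 0.2
        # weak implication: A -> B holds except for an unlikely failure
        B = random.random() < (0.95 if A else 0.30)
        count_A += A
        count_B += B
        count_AB += A and B

    print(count_AB / count_B, ">", count_A / n)   # P(A|B) ≈ 0.44 > P(A) = 0.2
    print(count_AB / count_A, ">", count_B / n)   # P(B|A) ≈ 0.95 > P(B) ≈ 0.43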
Comment – Rahul Deora (7 hours ago): I think what you are saying in the first statement is that in a Venn diagram, if A is contained in B, then if B is true, n(A)/n(B) must be higher than n(A)/n(S), as B is a smaller space than S. Even in the second, you say likewise?

Comment – Rainer P. (5 hours ago): @RahulDeora - Yes, that's how it works. The weak version is much less obvious, but you already did the math anyway. What you asked for is the intuition behind the result, which can be best observed in the stronger statement.
12 Answers
12
active
oldest
votes
12 Answers
12
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
I think another mathematical way of putting it may help. Consider the claim in the context of Bayes' rule:
Claim: if $P(B|A)>P(B)$ then $P(A|B) > P(A)$
Bayes' rule:
$$ P(A mid B) = fracP(B mid A) , P(A)P(B) $$
assuming $P(B)$ nonzero. Thus
$$fracB)P(A) = fracP(BP(B)$$
If $P(B|A)>P(B)$, then $fracP(BP(B) > 1$.
Then $fracB)P(A) > 1$, and so $P(A|B) > P(A)$.
This proves the claim and an even stronger conclusion - that the respective proportions of the likelihoods must be equal.
$endgroup$
$begingroup$
I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely"
$endgroup$
– probabilityislogic
9 hours ago
$begingroup$
@probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
$endgroup$
– Acccumulation
34 mins ago
add a comment |
$begingroup$
I think another mathematical way of putting it may help. Consider the claim in the context of Bayes' rule:
Claim: if $P(B|A)>P(B)$ then $P(A|B) > P(A)$
Bayes' rule:
$$ P(A mid B) = fracP(B mid A) , P(A)P(B) $$
assuming $P(B)$ nonzero. Thus
$$fracB)P(A) = fracP(BP(B)$$
If $P(B|A)>P(B)$, then $fracP(BP(B) > 1$.
Then $fracB)P(A) > 1$, and so $P(A|B) > P(A)$.
This proves the claim and an even stronger conclusion - that the respective proportions of the likelihoods must be equal.
$endgroup$
$begingroup$
I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely"
$endgroup$
– probabilityislogic
9 hours ago
$begingroup$
@probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
$endgroup$
– Acccumulation
34 mins ago
add a comment |
$begingroup$
I think another mathematical way of putting it may help. Consider the claim in the context of Bayes' rule:
Claim: if $P(B|A)>P(B)$ then $P(A|B) > P(A)$
Bayes' rule:
$$ P(A mid B) = fracP(B mid A) , P(A)P(B) $$
assuming $P(B)$ nonzero. Thus
$$fracB)P(A) = fracP(BP(B)$$
If $P(B|A)>P(B)$, then $fracP(BP(B) > 1$.
Then $fracB)P(A) > 1$, and so $P(A|B) > P(A)$.
This proves the claim and an even stronger conclusion - that the respective proportions of the likelihoods must be equal.
$endgroup$
I think another mathematical way of putting it may help. Consider the claim in the context of Bayes' rule:
Claim: if $P(B|A)>P(B)$ then $P(A|B) > P(A)$
Bayes' rule:
$$ P(A mid B) = fracP(B mid A) , P(A)P(B) $$
assuming $P(B)$ nonzero. Thus
$$fracB)P(A) = fracP(BP(B)$$
If $P(B|A)>P(B)$, then $fracP(BP(B) > 1$.
Then $fracB)P(A) > 1$, and so $P(A|B) > P(A)$.
This proves the claim and an even stronger conclusion - that the respective proportions of the likelihoods must be equal.
edited yesterday
answered yesterday
Aaron HallAaron Hall
323215
323215
$begingroup$
I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely"
$endgroup$
– probabilityislogic
9 hours ago
$begingroup$
@probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
$endgroup$
– Acccumulation
34 mins ago
add a comment |
$begingroup$
I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely"
$endgroup$
– probabilityislogic
9 hours ago
$begingroup$
@probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
$endgroup$
– Acccumulation
34 mins ago
$begingroup$
I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely"
$endgroup$
– probabilityislogic
9 hours ago
$begingroup$
I liked this because it shows the stronger link "if A makes B x percent more likely, then B makes A x percent more likely"
$endgroup$
– probabilityislogic
9 hours ago
$begingroup$
@probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
$endgroup$
– Acccumulation
34 mins ago
$begingroup$
@probabilityislogic Phrasing it that way introduces ambiguity. If the prior probability is 10%, and the posterior is 15%, did the probability increase 5% (15% minus 10%) or 50% (15% divided by 10%)?
$endgroup$
– Acccumulation
34 mins ago
add a comment |
$begingroup$
Well, I don't like the word "makes" in the question. That implies some sort of causality and causality usually doesn't reverse.
But you asked for intuition. So, I'd think about some examples, because that seems to spark intuition. Choose one you like:
If a person is a woman, it is more likely that the person voted for a Democrat.
If a person voted for a Democrat, it is more likely that the person is a woman.
If a man is a professional basketball center, it is more likely that he is over 2 meters tall.
If a man is over 2 meters tall, it is more likely that he is a basketball center.
If it is over 40 degrees Celsius, it is more likely that there will be a blackout.
If there has been a blackout, it is more likely that it is over 40 degrees.
And so on.
$endgroup$
4
$begingroup$
That's not about probability. That's about 1 to 1 relationships.
$endgroup$
– Peter Flom♦
yesterday
6
$begingroup$
@jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.
$endgroup$
– hobbs
yesterday
2
$begingroup$
Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.
$endgroup$
– hobbs
yesterday
5
$begingroup$
@Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.
$endgroup$
– Peter Flom♦
21 hours ago
5
$begingroup$
More likely than someone who is less than 2 meters tall.
$endgroup$
– Peter Flom♦
21 hours ago
|
show 7 more comments
$begingroup$
Well, I don't like the word "makes" in the question. That implies some sort of causality and causality usually doesn't reverse.
But you asked for intuition. So, I'd think about some examples, because that seems to spark intuition. Choose one you like:
If a person is a woman, it is more likely that the person voted for a Democrat.
If a person voted for a Democrat, it is more likely that the person is a woman.
If a man is a professional basketball center, it is more likely that he is over 2 meters tall.
If a man is over 2 meters tall, it is more likely that he is a basketball center.
If it is over 40 degrees Celsius, it is more likely that there will be a blackout.
If there has been a blackout, it is more likely that it is over 40 degrees.
And so on.
$endgroup$
4
$begingroup$
That's not about probability. That's about 1 to 1 relationships.
$endgroup$
– Peter Flom♦
yesterday
6
$begingroup$
@jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.
$endgroup$
– hobbs
yesterday
2
$begingroup$
Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.
$endgroup$
– hobbs
yesterday
5
$begingroup$
@Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.
$endgroup$
– Peter Flom♦
21 hours ago
5
$begingroup$
More likely than someone who is less than 2 meters tall.
$endgroup$
– Peter Flom♦
21 hours ago
|
show 7 more comments
$begingroup$
Well, I don't like the word "makes" in the question. That implies some sort of causality and causality usually doesn't reverse.
But you asked for intuition. So, I'd think about some examples, because that seems to spark intuition. Choose one you like:
If a person is a woman, it is more likely that the person voted for a Democrat.
If a person voted for a Democrat, it is more likely that the person is a woman.
If a man is a professional basketball center, it is more likely that he is over 2 meters tall.
If a man is over 2 meters tall, it is more likely that he is a basketball center.
If it is over 40 degrees Celsius, it is more likely that there will be a blackout.
If there has been a blackout, it is more likely that it is over 40 degrees.
And so on.
$endgroup$
Well, I don't like the word "makes" in the question. That implies some sort of causality and causality usually doesn't reverse.
But you asked for intuition. So, I'd think about some examples, because that seems to spark intuition. Choose one you like:
If a person is a woman, it is more likely that the person voted for a Democrat.
If a person voted for a Democrat, it is more likely that the person is a woman.
If a man is a professional basketball center, it is more likely that he is over 2 meters tall.
If a man is over 2 meters tall, it is more likely that he is a basketball center.
If it is over 40 degrees Celsius, it is more likely that there will be a blackout.
If there has been a blackout, it is more likely that it is over 40 degrees.
And so on.
edited yesterday
answered yesterday
Peter Flom♦Peter Flom
77.5k12110218
77.5k12110218
4
$begingroup$
That's not about probability. That's about 1 to 1 relationships.
$endgroup$
– Peter Flom♦
yesterday
6
$begingroup$
@jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.
$endgroup$
– hobbs
yesterday
2
$begingroup$
Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.
$endgroup$
– hobbs
yesterday
5
$begingroup$
@Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.
$endgroup$
– Peter Flom♦
21 hours ago
5
$begingroup$
More likely than someone who is less than 2 meters tall.
$endgroup$
– Peter Flom♦
21 hours ago
|
show 7 more comments
4
$begingroup$
That's not about probability. That's about 1 to 1 relationships.
$endgroup$
– Peter Flom♦
yesterday
6
$begingroup$
@jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.
$endgroup$
– hobbs
yesterday
2
$begingroup$
Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.
$endgroup$
– hobbs
yesterday
5
$begingroup$
@Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.
$endgroup$
– Peter Flom♦
21 hours ago
5
$begingroup$
More likely than someone who is less than 2 meters tall.
$endgroup$
– Peter Flom♦
21 hours ago
4
4
$begingroup$
That's not about probability. That's about 1 to 1 relationships.
$endgroup$
– Peter Flom♦
yesterday
$begingroup$
That's not about probability. That's about 1 to 1 relationships.
$endgroup$
– Peter Flom♦
yesterday
6
6
$begingroup$
@jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.
$endgroup$
– hobbs
yesterday
$begingroup$
@jww Imagine the statement "if it is raining, the street is wet" (and suppose that's a valid implication for the moment, while the converse is not). Now take a large number of "samples" in different times and places, where you record whether it's raining and whether the street is wet. The street will be wet in more of the samples where it's raining than the samples where it's not; but also, it will be raining in more of the samples where the street is wet than the samples where the street is dry. That's probability.
$endgroup$
– hobbs
yesterday
2
2
$begingroup$
Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.
$endgroup$
– hobbs
yesterday
$begingroup$
Both phenomena are caused by the same implication; the implication only works one way, but observing the consequent makes it more likely that you're looking at a sample where the antecedent is true.
$endgroup$
– hobbs
yesterday
5
5
$begingroup$
@Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.
$endgroup$
– Peter Flom♦
21 hours ago
$begingroup$
@Barmar Sorry, but that partly demonstrates the correctness of my logic. Because say 36/25,000 is a whole lot higher than 1/150,000,000.
$endgroup$
– Peter Flom♦
21 hours ago
5
5
$begingroup$
More likely than someone who is less than 2 meters tall.
$endgroup$
– Peter Flom♦
21 hours ago
$begingroup$
More likely than someone who is less than 2 meters tall.
$endgroup$
– Peter Flom♦
21 hours ago
|
show 7 more comments
$begingroup$
To add on the answer by @Dasherman: What can it mean to say that two events are related, or maybe associated or correlated? Maybe we could for a definition compare the joint probability (Assuming $DeclareMathOperatorPmathbbP P(A)>0, P(B)>0$):
$$
eta(A,B)=fracP(A cap B)P(A) P(B)
$$
so if $eta$ is larger than one, $A$ and $B$ occurs together more often than under independence. Then we can say that $A$ and $B$ are positively related.
But now, using the definition of conditional probability, $fracP(A cap B)P(A) P(B)>1$ is an easy consequence of $P(B mid A) > P(B)$. But $fracP(A cap B)P(A) P(B)$ is completely symmetric in $A$ and $B$ (interchanging all occurrences of the symbol $A$ with $B$ and vice versa) leaves the same formulas, so is also equivalent with $P(A mid B) > P(A)$. That gives the result. So the intuition you ask for is that $eta(A,B)$ is symmetric in $A$ and $B$.
The answer by @gunes gave a practical example, and it is easy to make others the same way.
$endgroup$
add a comment |
$begingroup$
To add on the answer by @Dasherman: What can it mean to say that two events are related, or maybe associated or correlated? Maybe we could for a definition compare the joint probability (Assuming $DeclareMathOperatorPmathbbP P(A)>0, P(B)>0$):
$$
eta(A,B)=fracP(A cap B)P(A) P(B)
$$
so if $eta$ is larger than one, $A$ and $B$ occurs together more often than under independence. Then we can say that $A$ and $B$ are positively related.
But now, using the definition of conditional probability, $fracP(A cap B)P(A) P(B)>1$ is an easy consequence of $P(B mid A) > P(B)$. But $fracP(A cap B)P(A) P(B)$ is completely symmetric in $A$ and $B$ (interchanging all occurrences of the symbol $A$ with $B$ and vice versa) leaves the same formulas, so is also equivalent with $P(A mid B) > P(A)$. That gives the result. So the intuition you ask for is that $eta(A,B)$ is symmetric in $A$ and $B$.
The answer by @gunes gave a practical example, and it is easy to make others the same way.
$endgroup$
add a comment |
$begingroup$
To add on the answer by @Dasherman: What can it mean to say that two events are related, or maybe associated or correlated? Maybe we could for a definition compare the joint probability (Assuming $DeclareMathOperatorPmathbbP P(A)>0, P(B)>0$):
$$
eta(A,B)=fracP(A cap B)P(A) P(B)
$$
so if $eta$ is larger than one, $A$ and $B$ occurs together more often than under independence. Then we can say that $A$ and $B$ are positively related.
But now, using the definition of conditional probability, $fracP(A cap B)P(A) P(B)>1$ is an easy consequence of $P(B mid A) > P(B)$. But $fracP(A cap B)P(A) P(B)$ is completely symmetric in $A$ and $B$ (interchanging all occurrences of the symbol $A$ with $B$ and vice versa) leaves the same formulas, so is also equivalent with $P(A mid B) > P(A)$. That gives the result. So the intuition you ask for is that $eta(A,B)$ is symmetric in $A$ and $B$.
The answer by @gunes gave a practical example, and it is easy to make others the same way.
$endgroup$
To add on the answer by @Dasherman: What can it mean to say that two events are related, or maybe associated or correlated? Maybe we could for a definition compare the joint probability (Assuming $DeclareMathOperatorPmathbbP P(A)>0, P(B)>0$):
$$
eta(A,B)=fracP(A cap B)P(A) P(B)
$$
so if $eta$ is larger than one, $A$ and $B$ occurs together more often than under independence. Then we can say that $A$ and $B$ are positively related.
But now, using the definition of conditional probability, $fracP(A cap B)P(A) P(B)>1$ is an easy consequence of $P(B mid A) > P(B)$. But $fracP(A cap B)P(A) P(B)$ is completely symmetric in $A$ and $B$ (interchanging all occurrences of the symbol $A$ with $B$ and vice versa) leaves the same formulas, so is also equivalent with $P(A mid B) > P(A)$. That gives the result. So the intuition you ask for is that $eta(A,B)$ is symmetric in $A$ and $B$.
The answer by @gunes gave a practical example, and it is easy to make others the same way.
edited yesterday
answered yesterday
kjetil b halvorsenkjetil b halvorsen
32.4k985241
32.4k985241
add a comment |
add a comment |
$begingroup$
By way of intuition, real world examples such as Peter Flom gives are most helpful for some people. The other thing that commonly helps people is pictures. So, to cover most bases, let's have some pictures.
What we have here are two very basic diagrams showing probabilities. The first shows two independent predicates I'll call Red and Plain. It is clear that they are independent because the lines line up. The proportion of plain area that is red is the same as the proportion of stripy area that is red and is also the same as the total proportion that is red.
In the second image, we have non-independent distributions. Specifically, we have expanded some of the plain red area into the stripy area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
Meanwhile, have a look at the plain side of that image. Clearly the proportion of the plain region that is red is greater than the proportion of the whole image that is red. That is because the plain region has been given a bunch more area and all of it is red.
So, red makes plain more likely, and plain makes red more likely.
What's actually happening here? A is evidence for B (that is, A makes B more likely) when the area that contains both A and B is bigger than would be predicted if they were independent. Because the intersection between A and B is the same as the intersection between B and A, that also implies that B is evidence for A.
One note of caution: although the argument above seems very symmetrical, it may not be the case that the strength of evidence in both directions is equal. For example, consider this third image.
Here the same thing has happened: plain red has eaten up territory previously belonging to stripy red. In fact, it has completely finished the job!
Note that the point being red outright guarantees plainness because there are no stripy red regions left. However a point being plain has not guaranteed redness, because there are still green regions left. Nevertheless, a point in the box being plain increases the chance that it is red, and a point being red increases the chance that it is plain. Both directions imply more likely, just not by the same amount.
New contributor
$endgroup$
$begingroup$
I like the images :) However It looks like either the images or the explanation is flipped:In the second image, we have non-independent distributions. Specifically, we have moved some of the stripy red area into the plain area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
-- your second image has gained plain area than the first, so going from image 1 to 2 we have moved plain area into the striped area.
$endgroup$
– Pod
9 hours ago
$begingroup$
So, if I have venn diagram with some common A,B intersection area and all I do is increase that intersection area, I automatically add more A,B for the entire space(without making the space larger) and change/increase n(A)/n(S) and n(B)/n(S) as a consequence. Right? More comments?
$endgroup$
– Rahul Deora
7 hours ago
add a comment |
$begingroup$
By way of intuition, real world examples such as Peter Flom gives are most helpful for some people. The other thing that commonly helps people is pictures. So, to cover most bases, let's have some pictures.
What we have here are two very basic diagrams showing probabilities. The first shows two independent predicates I'll call Red and Plain. It is clear that they are independent because the lines line up. The proportion of plain area that is red is the same as the proportion of stripy area that is red and is also the same as the total proportion that is red.
In the second image, we have non-independent distributions. Specifically, we have expanded some of the plain red area into the stripy area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
Meanwhile, have a look at the plain side of that image. Clearly the proportion of the plain region that is red is greater than the proportion of the whole image that is red. That is because the plain region has been given a bunch more area and all of it is red.
So, red makes plain more likely, and plain makes red more likely.
What's actually happening here? A is evidence for B (that is, A makes B more likely) when the area that contains both A and B is bigger than would be predicted if they were independent. Because the intersection between A and B is the same as the intersection between B and A, that also implies that B is evidence for A.
One note of caution: although the argument above seems very symmetrical, it may not be the case that the strength of evidence in both directions is equal. For example, consider this third image.
Here the same thing has happened: plain red has eaten up territory previously belonging to stripy red. In fact, it has completely finished the job!
Note that the point being red outright guarantees plainness because there are no stripy red regions left. However a point being plain has not guaranteed redness, because there are still green regions left. Nevertheless, a point in the box being plain increases the chance that it is red, and a point being red increases the chance that it is plain. Both directions imply more likely, just not by the same amount.
New contributor
$endgroup$
$begingroup$
I like the images :) However It looks like either the images or the explanation is flipped:In the second image, we have non-independent distributions. Specifically, we have moved some of the stripy red area into the plain area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
-- your second image has gained plain area than the first, so going from image 1 to 2 we have moved plain area into the striped area.
$endgroup$
– Pod
9 hours ago
$begingroup$
So, if I have venn diagram with some common A,B intersection area and all I do is increase that intersection area, I automatically add more A,B for the entire space(without making the space larger) and change/increase n(A)/n(S) and n(B)/n(S) as a consequence. Right? More comments?
$endgroup$
– Rahul Deora
7 hours ago
add a comment |
$begingroup$
By way of intuition, real world examples such as Peter Flom gives are most helpful for some people. The other thing that commonly helps people is pictures. So, to cover most bases, let's have some pictures.
What we have here are two very basic diagrams showing probabilities. The first shows two independent predicates I'll call Red and Plain. It is clear that they are independent because the lines line up. The proportion of plain area that is red is the same as the proportion of stripy area that is red and is also the same as the total proportion that is red.
In the second image, we have non-independent distributions. Specifically, we have expanded some of the plain red area into the stripy area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
Meanwhile, have a look at the plain side of that image. Clearly the proportion of the plain region that is red is greater than the proportion of the whole image that is red. That is because the plain region has been given a bunch more area and all of it is red.
So, red makes plain more likely, and plain makes red more likely.
What's actually happening here? A is evidence for B (that is, A makes B more likely) when the area that contains both A and B is bigger than would be predicted if they were independent. Because the intersection between A and B is the same as the intersection between B and A, that also implies that B is evidence for A.
One note of caution: although the argument above seems very symmetrical, it may not be the case that the strength of evidence in both directions is equal. For example, consider this third image.
Here the same thing has happened: plain red has eaten up territory previously belonging to stripy red. In fact, it has completely finished the job!
Note that the point being red outright guarantees plainness because there are no stripy red regions left. However a point being plain has not guaranteed redness, because there are still green regions left. Nevertheless, a point in the box being plain increases the chance that it is red, and a point being red increases the chance that it is plain. Both directions imply more likely, just not by the same amount.
New contributor
$endgroup$
By way of intuition, real world examples such as Peter Flom gives are most helpful for some people. The other thing that commonly helps people is pictures. So, to cover most bases, let's have some pictures.
What we have here are two very basic diagrams showing probabilities. The first shows two independent predicates I'll call Red and Plain. It is clear that they are independent because the lines line up. The proportion of plain area that is red is the same as the proportion of stripy area that is red and is also the same as the total proportion that is red.
In the second image, we have non-independent distributions. Specifically, we have expanded some of the plain red area into the stripy area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
Meanwhile, have a look at the plain side of that image. Clearly the proportion of the plain region that is red is greater than the proportion of the whole image that is red. That is because the plain region has been given a bunch more area and all of it is red.
So, red makes plain more likely, and plain makes red more likely.
What's actually happening here? A is evidence for B (that is, A makes B more likely) when the area that contains both A and B is bigger than would be predicted if they were independent. Because the intersection between A and B is the same as the intersection between B and A, that also implies that B is evidence for A.
One note of caution: although the argument above seems very symmetrical, it may not be the case that the strength of evidence in both directions is equal. For example, consider this third image.
Here the same thing has happened: plain red has eaten up territory previously belonging to stripy red. In fact, it has completely finished the job!
Note that the point being red outright guarantees plainness because there are no stripy red regions left. However a point being plain has not guaranteed redness, because there are still green regions left. Nevertheless, a point in the box being plain increases the chance that it is red, and a point being red increases the chance that it is plain. Both directions imply more likely, just not by the same amount.
New contributor
edited 7 hours ago
New contributor
answered 22 hours ago
JosiahJosiah
1713
1713
New contributor
New contributor
$begingroup$
I like the images :) However It looks like either the images or the explanation is flipped:In the second image, we have non-independent distributions. Specifically, we have moved some of the stripy red area into the plain area without changing the fact that it is red. Clearly then, being red makes being plain more likely.
-- your second image has gained plain area than the first, so going from image 1 to 2 we have moved plain area into the striped area.
$endgroup$
– Pod
9 hours ago
So, if I have a Venn diagram with some common A, B intersection area and all I do is increase that intersection area, I automatically add more A, B to the entire space (without making the space larger) and change/increase n(A)/n(S) and n(B)/n(S) as a consequence. Right? More comments?
– Rahul Deora
If A makes B more likely, this means the events are somehow related. This relation works both ways.
If A makes B more likely, this means that A and B tend to happen together. This then means that B also makes A more likely.
– Dasherman
This perhaps could use some expansion? Without a definition of related it is a bit empty.
– mdewey
I was trying to stay away from anything rigorous, since OP asked for an intuitive explanation. You are right that it is quite empty as it is now, but I'm not sure how to expand it in an intuitive way. I have added an attempt.
– Dasherman
If A makes B more likely, then A carries crucial information about B. Even if it does not contribute the same amount, that information is not lost the other way around. In the end, we have two events whose occurrences support each other. I can't seem to imagine a scenario where the occurrence of A increases the likelihood of B while the occurrence of B decreases the likelihood of A. For example, if it rains, the floor will be wet with high probability; and if the floor is wet, that doesn't mean it rained, but it doesn't decrease the chances either.
– gunes
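A tiny numeric check of the rain example; the probabilities below are invented purely for illustration:

    # Assumed numbers: rain is fairly common, rain almost always wets the floor,
    # and the floor is sometimes wet for other reasons (e.g. a spill).
    p_rain = 0.3
    p_wet_given_rain = 0.9
    p_wet_given_no_rain = 0.2

    # Total probability of a wet floor, then Bayes' rule for P(rain | wet floor).
    p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)
    p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

    print(f"P(rain)             = {p_rain:.2f}")
    print(f"P(rain | wet floor) = {p_rain_given_wet:.2f}")  # ~0.66: higher, not lower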
You can make the math more intuitive by imagining a contingency table.

$$\begin{array}{cc|cc}
 & & A & \lnot A \\
 & a+b+c+d & a+c & b+d \\ \hline
B & a+b & a & b \\
\lnot B & c+d & c & d \\
\end{array}$$

When $A$ and $B$ are independent, the joint probabilities are products of the marginal probabilities:

$$\begin{array}{cc|cc}
 & & A & \lnot A \\
 & 1 & x & 1-x \\ \hline
B & y & a=xy & b=(1-x)y \\
\lnot B & 1-y & c=x(1-y) & d=(1-x)(1-y) \\
\end{array}$$

In that case the marginal and conditional probabilities are equal, e.g. $P(A) = P(A|B)$ and $P(B) = P(B|A)$. When there is no independence, you can view this as keeping the parameters $a, b, c, d$ the same (as products of the margins) but with an adjustment by $\pm z$:

$$\begin{array}{cc|cc}
 & & A & \lnot A \\
 & 1 & x & 1-x \\ \hline
B & y & a+z & b-z \\
\lnot B & 1-y & c-z & d+z \\
\end{array}$$

You can see this $z$ as breaking the equality of the marginal and conditional probabilities, or equivalently as breaking the relationship that the joint probabilities are products of the marginal probabilities.

Now, from this point of view (of breaking these equalities), you can see that the breaking happens in two ways at once: both $P(A|B) \neq P(A)$ and $P(B|A) \neq P(B)$. And in both cases the inequality is $>$ when $z$ is positive and $<$ when $z$ is negative.

So you can see the connection "$P(A|B) > P(A)$ implies $P(B|A) > P(B)$" via the joint probability: $P(B,A) > P(A)\,P(B)$.

If A and B often happen together (the joint probability is higher than the product of the marginal probabilities), then observing the one makes the (conditional) probability of the other higher.
– Martijn Weterings
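A minimal Python sketch of the perturbed table above; the marginals and the perturbation $z$ below are arbitrary illustrative values:

    def conditionals(x, y, z):
        """Joint probabilities from marginals x = P(A), y = P(B), perturbed by z."""
        a = x * y + z              # P(A, B)
        b = (1 - x) * y - z        # P(not A, B)
        c = x * (1 - y) - z        # P(A, not B)
        p_A_given_B = a / (a + b)  # = a / y
        p_B_given_A = a / (a + c)  # = a / x
        return p_A_given_B, p_B_given_A

    x, y = 0.4, 0.3  # assumed marginals P(A), P(B)
    for z in (-0.05, 0.0, 0.05):
        pAgB, pBgA = conditionals(x, y, z)
        print(f"z={z:+.2f}: P(A|B)={pAgB:.3f} vs P(A)={x:.2f}, "
              f"P(B|A)={pBgA:.3f} vs P(B)={y:.2f}")
    # Both conditionals sit above their marginals when z > 0, and below when z < 0.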
You are told that Sam is a woman and Kim is a man, and that one of the two wears make-up and the other does not. Which of them would you guess wears make-up?
You are told that Sam wears make-up and Kim doesn't, and one of the two is a man and one is a woman. Who would you guess is the woman?
– Hagen von Eitzen
It is not so straightforward to connect this to the original problem. What exactly is event A and what is event B? Here it seems more like some comparison of probabilities. Event A is 'x is a woman' (not-A is the event 'x is a man'), and event B is 'x wears make-up'. But now we suddenly have a Sam and a Kim; where does that come from, and should we use any information about the subjective masculinity or femininity of their names?
– Martijn Weterings
Suppose we denote the posterior-to-prior probability ratio of an event as:

$$\Delta(A|B) \equiv \frac{\mathbb{P}(A|B)}{\mathbb{P}(A)}.$$

Then an alternative expression of Bayes' theorem (see this related post) is:

$$\Delta(A|B)
= \frac{\mathbb{P}(A|B)}{\mathbb{P}(A)}
= \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(A)\,\mathbb{P}(B)}
= \frac{\mathbb{P}(B|A)}{\mathbb{P}(B)}
= \Delta(B|A).$$

The posterior-to-prior probability ratio tells us whether the argument event is made more or less likely by the occurrence of the conditioning event (and how much more or less likely). The above form of Bayes' theorem shows us that the posterior-to-prior probability ratio is symmetric in the two events.$^\dagger$ For example, if observing $B$ makes $A$ more likely than it was a priori, then observing $A$ makes $B$ more likely than it was a priori.

$^\dagger$ Note that this is a probability rule, and so it should not be interpreted causally. This symmetry holds in a probabilistic sense for passive observation; however, it does not hold if you intervene in the system to change $A$ or $B$. In that latter case you would need to use causal operations (e.g., the $\text{do}$ operator) to find the effect of the change in the conditioning variable.
– Ben
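A quick numeric check of this symmetry, using an invented joint distribution for $A$ and $B$:

    # Invented joint distribution over (A, B).
    p_AB, p_AnotB, p_notAB, p_notAnotB = 0.20, 0.10, 0.25, 0.45

    p_A = p_AB + p_AnotB   # 0.30
    p_B = p_AB + p_notAB   # 0.45

    delta_A_given_B = (p_AB / p_B) / p_A   # Delta(A|B) = P(A|B) / P(A)
    delta_B_given_A = (p_AB / p_A) / p_B   # Delta(B|A) = P(B|A) / P(B)

    # Both equal P(A,B) / (P(A) P(B)) ~ 1.481, so each event raises the other.
    print(delta_A_given_B, delta_B_given_A)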
It seems there is some confusion between causation and correlation. Indeed, the question statement is false for causation, as can be seen by an example such as:
- If a dog is wearing a scarf, then it is a domesticated animal.
The following is not true:
- Seeing a domesticated animal wearing a scarf implies it is a dog.
- Seeing a domesticated dog implies it is wearing a scarf.
However, if you are thinking of probabilities (correlation), then it IS true:
- Dogs wearing scarfs are much more likely to be domesticated animals than dogs not wearing scarfs (or animals in general, for that matter).
The following is true:
- A domesticated animal wearing a scarf is more likely to be a dog than another animal.
- A domesticated dog is more likely to be wearing a scarf than a non-domesticated dog.
If this is not intuitive, think of a pool of animals including ants, dogs and cats. Dogs and cats can both be domesticated and wear scarfs; ants can do neither.
- If you increase the probability of domesticated animals in your pool, you will also increase the chance of seeing an animal wearing a scarf.
- If you increase the probability of either cats or dogs, then you will also increase the probability of seeing an animal wearing a scarf.
Being domesticated is the "secret" link between the animal and wearing a scarf, and that "secret" link will exert its influence both ways.
Edit: Giving an example to your question in the comments:
Imagine a world where animals are either Cats or Dogs. They can be either domesticated or not. They can wear a scarf or not. Imagine there exist 100 total animals, 50 Dogs and 50 Cats.
Now consider the statement A to be: "Dogs wearing scarfs are thrice as likely to be domesticated animals as dogs not wearing scarfs".
If A is not true, then you can imagine that the world could be made of 50 Dogs, 25 of them domesticated (of which 10 wear scarfs) and 25 of them wild (of which 10 wear scarfs). Same stats for cats.
Then, if you saw a domesticated animal in this world, it would have a 50% chance of being a dog (25/50, 25 dogs out of 50 domesticated animals) and a 40% chance of wearing a scarf (20/50, 10 Dogs and 10 Cats out of 50 domesticated animals).
However, if A is true, then you have a world where there are 50 Dogs, 25 of them domesticated (of which 15 wear scarfs) and 25 of them wild (of which 5 wear scarfs). Cats keep the old stats: 50 Cats, 25 of them domesticated (of which 10 wear scarfs), 25 of them wild (of which 10 wear scarfs).
Then, if you saw a domesticated animal in this world, it would have the same 50% chance of being a dog (25/50, 25 dogs out of 50 domesticated animals) but a 50% chance of wearing a scarf (25/50, 15 Dogs and 10 Cats out of 50 domesticated animals).
As you can see, if you say that A is true, then if you saw a domesticated animal wearing a scarf in this world, it would more likely be a Dog (60%, or 15/25) than any other animal (in this case a Cat, 40%, or 10/25).
– H4uZ
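The counts in the edited example can be checked mechanically; here is a minimal Python sketch encoding the "A is true" world described above and recomputing the stated conditional probabilities:

    from collections import Counter

    # (species, domesticated?, wears scarf?) -> count, per the "A is true" world.
    counts = Counter({
        ("dog", True, True): 15, ("dog", True, False): 10,
        ("dog", False, True): 5, ("dog", False, False): 20,
        ("cat", True, True): 10, ("cat", True, False): 15,
        ("cat", False, True): 10, ("cat", False, False): 15,
    })

    def prob(pred, given=lambda k: True):
        pool = {k: v for k, v in counts.items() if given(k)}
        return sum(v for k, v in pool.items() if pred(k)) / sum(pool.values())

    # Among domesticated animals: chance of being a dog, and of wearing a scarf.
    print(prob(lambda k: k[0] == "dog", given=lambda k: k[1]))           # 0.5
    print(prob(lambda k: k[2], given=lambda k: k[1]))                    # 0.5
    # Among domesticated animals wearing a scarf: chance of being a dog.
    print(prob(lambda k: k[0] == "dog", given=lambda k: k[1] and k[2]))  # 0.6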
This is the line I have a problem with: "A domesticated animal wearing a scarf is more likely to be a dog than another animal." When we made our initial statement we did not make any claim about the other animals that could wear scarfs. There could be hundreds. We only made a statement about dogs.
– Rahul Deora
See if my edit helps with your particular issue.
– H4uZ
There is a confusion here between causation and correlation. So I'll give you an example where the exact opposite happens.
Some people are rich, some are poor. Some poor people are given benefits, which makes them less poor. But people who get benefits are still more likely to be poor, even with benefits.
If you are given benefits, that makes it more likely that you can afford cinema tickets. ("Makes it more likely" meaning causality). But if you can afford cinema tickets, that makes it less likely that you are among the people who are poor enough to get benefits, so if you can afford cinema tickets, you are less likely to get benefits.
– gnasher729
This isn't an answer to the question. Interesting, but not an answer. In fact, it's talking about a different scenario; the reason the opposite happens is that it's using two different metrics that are named similarly (poor without benefits vs. poor with benefits) and as such is a completely different scenario.
– wizzwizz4
The intuition becomes clear if you look at the stronger statement:
If A implies B, then B makes A more likely.
Implication:
A true -> B true
A false -> B true or false
Reverse implication:
B true -> A true or false
B false -> A false
Obviously A is more likely to be true if B is known to be true as well, because if B were false then A would be false too. The same logic applies to the weaker statement:
If A makes B more likely, then B makes A more likely.
Weak implication:
A true -> B true or (unlikely) false
A false -> B true or false
Reverse weak implication:
B true -> A true or false
B false -> A false or (unlikely) true
– Rainer P.
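To spell out the algebra behind the stronger statement (assuming $P(B) > 0$): if $A$ implies $B$, then $A \cap B = A$, so

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)}{P(B)} \ge P(A),$$

with strict inequality whenever $P(B) < 1$, which matches the Venn-diagram reading discussed in the comments.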
I think what you are saying in the first statement is that, in a Venn diagram, if A is contained in B, then if B is true, n(A)/n(B) must be higher than n(A)/n(S), as B is a smaller space than S. Even in the second, you say likewise?
– Rahul Deora
@RahulDeora - Yes, that's how it works. The weak version is much less obvious, but you already did the math anyway. What you asked for is the intuition behind the result, which can be best observed in the stronger statement.
– Rainer P.
I edited the question to remove the word 'make'. This question sounded a bit like those ambiguous questions on Facebook, where you have to solve some algebraic sum with pictures and people get widely different answers due to different interpretations of the question. That is not something we want here. (An alternative is to close the question for being unclear and have the OP change it.)
– Martijn Weterings