
Thread: Question about convection in stars

  1. #1
    Join Date
    Nov 2004
    Posts
    5,578

    Question about convection in stars

    I had always assumed that the reason red dwarfs live practically forever is due to being fully convective. But red supergiants are fully convective and they die in an eyeblink. I'm missing something but I don't know what.
    "Occam" is the name of the alien race that will enslave us all eventually. And they've got razors for hands. I don't know if that's true but it seems like the simplest answer."

    Stephen Colbert.

  2. #2
    Join Date
    Jul 2005
    Location
    Massachusetts, USA
    Posts
    22,052
    Quote Originally Posted by parallaxicality View Post
    I had always assumed that the reason red dwarfs live practically forever is due to being fully convective. But red supergiants are fully convective and they die in an eyeblink. I'm missing something but I don't know what.
    I think you've oversimplified your statement about red dwarfs. In part they keep going for a very long time because they are very slow at using their hydrogen. The convection helps too, especially with classes M3-M8. M0-M2 don't convect nearly as large a portion of their mass through the core. That being said, red supergiants seem to have a big convective layer, but that layer doesn't include swapping out material from the core.
    Forming opinions as we speak

  3. #3
    Join Date
    Jun 2003
    Posts
    8,681
    Quote Originally Posted by antoniseb View Post
    That being said, red supergiants seem to have a big convective layer, but that layer doesn't include swapping out material from the core.
    Surely if red giants have a thick convective layer, the cells must be tens or even hundreds of millions of kilometres deep; this suggests that they are also tens or hundreds of millions of kilometres wide, otherwise they would look more like tangled string in cross-section.
    Last edited by eburacum45; 2020-Feb-03 at 07:12 PM.

  4. #4
    Join Date
    Sep 2003
    Posts
    13,075
    Quote Originally Posted by parallaxicality View Post
    I had always assumed that the reason red dwarfs live practically forever is due to being fully convective. But red supergiants are fully convective and they die in an eyeblink. I'm missing something but I don't know what.
    But is it true that greater convection extends a star's life? Does it feed more hydrogen to the core or something? Convection transfers heat faster, which should mean the star burns faster.
    We know time flies, we just can't see its wings.

  5. #5
    Join Date
    Oct 2018
    Posts
    16
    I am not going to pretend to be an expert here, but is comparing a main sequence star to one not fusing hydrogen an apples and oranges situation? Red giants and supergiants are ancient, evolved stars that can barely maintain hydrostatic equilibrium once they have exhausted their primary fuel supply and moved on to their secondary and tertiary reservoirs, and so on.

  6. #6
    Join Date
    Mar 2007
    Location
    Falls Church, VA (near Washington, DC)
    Posts
    8,944
    Quote Originally Posted by parallaxicality View Post
    I had always assumed that the reason red dwarfs live practically forever is due to being fully convective. But red supergiants are fully convective and they die in an eyeblink. I'm missing something but I don't know what.
    The supergiants were once main sequence stars with much greater masses. The thermodynamics of stars are such that the more massive ones burn at a disproportionately faster rate, and thus burn out much sooner. The effects of convection or lack thereof are trifling by comparison.
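
    A back-of-the-envelope way to see how steep that is (my own sketch, using the common textbook approximations L ~ M^3.5 and lifetime ~ fuel/L ~ M^-2.5, with the Sun at roughly 10 Gyr):

    T_SUN_GYR = 10.0  # approximate main-sequence lifetime of the Sun

    def ms_lifetime_gyr(mass_solar):
        # Crude scaling estimate: t ~ t_sun * (M/Msun)**-2.5
        return T_SUN_GYR * mass_solar ** -2.5

    for m in (15.0, 1.0, 0.2):
        print(f"M = {m:5.1f} Msun -> t ~ {ms_lifetime_gyr(m):8.3g} Gyr")

    # A 15 solar-mass star (a supergiant progenitor) lasts ~0.01 Gyr, while
    # a 0.2 solar-mass red dwarf lasts ~560 Gyr, far longer than the current
    # age of the universe.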

  7. #7
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by Hornblower View Post
    The supergiants were once main sequence stars with much greater masses. The thermodynamics of stars are such that the more massive ones burn at a disproportionately faster rate, and thus burn out much sooner. The effects of convection or lack thereof are trifling by comparison.
    A star doesn't change luminosity much when it becomes a red supergiant, but it does become more convective, and it does have a shorter lifetime. You are right that convection does not play much direct role in the lifetime of a star, because it is other factors that determine the age of each stage, based on what is changing inside the star and at what rate. The key thing that complete convection over most of the star will do is make the surface temperature red, so that's very important for the luminosity in an indirect way, in connection with the radius.
    Last edited by Ken G; 2020-Feb-04 at 06:10 AM.

  8. #8
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by parallaxicality View Post
    I had always assumed that the reason red dwarfs live practically forever is due to being fully convective. But red supergiants are fully convective and they die in an eyeblink. I'm missing something but I don't know what.
    It is true that a fully convective star will gain access to more nuclear fuel, extending its lifetime significantly. But that by itself is not enough to explain why their lifetimes are so very long; for that you have to understand why their luminosity is so low. So it's not so much that they have more fuel, it's that they burn the fuel they have way more slowly.

    A pretty complicated connection between the opacity and the structure of a fully convective star dictates that fully convective stars are "red", meaning their surface temperature is something like 3000 K. Then the luminosity is proportional to the surface temperature to the 4th power times the radius squared (the latter coming from surface area). Since the temperature is pretty cool, and the radius is rather small, the luminosity is very small, maybe ten thousand times lower than the Sun's. That's the main reason they live so long.
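
    As a quick sanity check on that scaling (a minimal sketch; the T and R values are illustrative guesses for M dwarfs, not measurements):

    T_SUN_K = 5772.0  # solar effective temperature

    def luminosity_solar(radius_solar, temp_k):
        # Stefan-Boltzmann scaling: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
        return radius_solar ** 2 * (temp_k / T_SUN_K) ** 4

    print(luminosity_solar(0.2, 3000))  # ~3e-3 Lsun, a mid-M dwarf
    print(luminosity_solar(0.1, 2800))  # ~6e-4 Lsun, near the bottom of the main sequence

    # The very lowest-mass dwarfs, smaller and cooler still, get down toward
    # the factor of ten thousand mentioned above.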

    As for any direct connection between convection and luminosity, realize that convection can be very efficient at carrying heat, but it can also carry very little heat if that's what the star requires. It just provides whatever energy is, or isn't, needed, and when the star is red and small, that's just not much at all.

  9. #9
    Join Date
    Sep 2003
    Posts
    13,075
    Quote Originally Posted by Ken G View Post
    As for any direct connection between convection and luminosity, realize that convection can be very efficient at carrying heat, but it can also carry very little heat if that's what the star requires. It just provides whatever energy is, or isn't, needed, and when the star is red and small, that's just not much at all.
    So, taking a stab at this: the luminosity of the star is fixed by the core luminosity, and the faster transfer by the outer convective zone expands the outer layers more than they otherwise would, with the result of a cooler surface. The larger surface area will match the inner luminosity with the reduced temperature. Perhaps work and entropy produce the fine details of the temperature drop more than pure expansion.

    How are them red apples?
    We know time flies, we just can't see its wings.

  10. #10
    Join Date
    Oct 2005
    Posts
    27,081
    Stars have an interesting logic, and it varies a lot from situation to situation. Sometimes the logic works inside-out, where the luminosity is determined by what is going on in the core and it just pushes that through the envelope of the star (that's more or less what happens in main sequence stars), and other times, it's outside-in, where what is happening at the surface dictates how much luminosity will be pulled out from the interior of the star (that's what happens in effect for red dwarfs). The red dwarf is highly convective, which by a complicated interaction with the opacity dictates that it will be red at the surface, and its radius is set by the need for the star to be degenerate, like a low-mass white dwarf. Convection easily carries whatever excess heat is needed to replace what is radiated from the surface; that's what makes the luminosity outside-in.
    Last edited by Ken G; 2020-Feb-05 at 12:59 AM.

  11. #11
    Join Date
    Sep 2003
    Posts
    13,075
    Quote Originally Posted by Ken G View Post
    Stars have an interesting logic, and it varies a lot from situation to situation. Sometimes the logic works inside-out, where the luminosity is determined by what is going on in the core and it just pushes that through the envelope of the star (that's more or less what happens in main sequence stars), and other times, it's outside-in, where what is happening at the surface dictates how much luminosity will be pulled out from the interior of the star (that's what happens in effect for red dwarfs). The red dwarf is highly convective, which by a complicated interaction with the opacity dictates that it will be red at the surface, and its radius is set by the need for the star to be degenerate, like a low-mass white dwarf. Convection easily carries whatever excess heat is needed to replace what is radiated from the surface; that's what makes the luminosity outside-in.
    Interesting. So is the cooler (red) surface due to greater opacity that restricts transfer and builds internal temperatures, whereby the core's "thermostat" regulates the luminosity as a result - thus outer to inner, as you say?
    We know time flies, we just can't see its wings.

  12. #12
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by George View Post
    Interesting. So is the cooler (red) surface due to greater opacity that restricts transfer and builds internal temperatures, whereby the core's "thermostat" regulates the luminosity as a result - thus outer to inner, as you say?
    The red color is hard to explain in a simple way; it is called the "Hayashi track", where the consequence of complete convection is that the surface opacity and surface temperature regulate each other to come to a narrow temperature range. It has to do with how sensitive the opacity is to temperature at lower surface temperatures, so small adjustments make a big effect.
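
    To see how small the adjustments are, here is a toy illustration (my own numbers, assuming the textbook H-minus opacity fit kappa ~ rho**0.5 * T**9, which roughly applies near these surface temperatures):

    def temp_factor_for_opacity_ratio(kappa_ratio, n=9.0):
        # If kappa ~ T**n, changing kappa by kappa_ratio only requires
        # changing T by kappa_ratio**(1/n).
        return kappa_ratio ** (1.0 / n)

    print(temp_factor_for_opacity_ratio(10.0))   # ~1.29
    print(temp_factor_for_opacity_ratio(100.0))  # ~1.67

    # Even a hundredfold change in the required opacity moves the surface
    # temperature by less than a factor of 2, which is why fully convective
    # stars crowd into a narrow band of surface temperatures.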

  14. #13
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Ken G View Post
    Stars have an interesting logic, and it varies a lot from situation to situation. Sometimes the logic works inside-out, where the luminosity is determined by what is going on in the core and it just pushes that through the envelope of the star (that's more or less what happens in main sequence stars), and other times, it's outside-in, where what is happening at the surface dictates how much luminosity will be pulled out from the interior of the star (that's what happens in effect for red dwarfs). The red dwarf is highly convective, which by a complicated interaction with the opacity dictates that it will be red at the surface, and its radius is set by the need for the star to be degenerate, like a low-mass white dwarf. Convection easily carries whatever excess heat is needed to replace what is radiated from the surface; that's what makes the luminosity outside-in.
    Red dwarfs are on the main sequence.
    How do you explain the huge differences in luminosity across the red dwarfs - from M3 to M8?

  15. #14
    Join Date
    Mar 2007
    Location
    Falls Church, VA (near Washington, DC)
    Posts
    8,944
    Quote Originally Posted by chornedsnorkack View Post
    Red dwarfs are on the main sequence.
    How do you explain the huge differences in luminosity across the red dwarfs - from M3 to M8?
    Over that range the mass goes from about 1/2 to 1/10 solar mass, if I am not mistaken. The luminosity is very sensitive to the mass.

  16. #15
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Hornblower View Post
    Over that range the mass goes from about 1/2 to 1/10 solar mass, if I am not mistaken. The luminosity is very sensitive to the mass.
    "Red dwarfs" in classes M0 to M2, with mass range 0,25 to 0,5 solar masses, are only astrological red dwarfs. There is no genuine astrophysical difference between M0 and K7 - sic! there is no K8 or K9.
    Around 0,25 solar masses and M3, tachocline reaches the centre. So the real red dwarfs range from there to about 0,08 solar masses and M8 - the end of main sequence against the dropoff to brown dwarfs.
    How does the supposed outside-in control of luminosity, for fixed radius and temperature, allow the large sensitivity of red dwarf luminosity to mass?

  17. #16
    Join Date
    Nov 2004
    Posts
    5,578
    astrological?
    "Occam" is the name of the alien race that will enslave us all eventually. And they've got razors for hands. I don't know if that's true but it seems like the simplest answer."

    Stephen Colbert.

  18. #17
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by chornedsnorkack View Post
    Red dwarfs are on the main sequence.
    Good point, I should have said stars that are radiative instead of convective, like typical main sequence stars but not red dwarfs! Those little guys get overlooked sometimes.
    How do you explain the huge differences in luminosity across the red dwarfs - from M3 to M8?
    Fair question. My answer allows variation in radius among the red dwarfs of different mass, but there is also surface temperature variation stemming both from the incompleteness of that simplification, and from the fact that what gets counted as red dwarfs varies in how fully convective they are. So we can think of the lower main sequence as a transition from radiative stars to fully convective at the very bottom. I was only talking about the fully convective ones at the bottom, the "reddest" dwarfs if you will. Then there's the transition to brown dwarfs and so on, so these names are rather arbitrary. Classifications are more like points of reference, like the letters and numbers you see in a parking lot to help you find your car, than they are boxes to separate things. So I'm describing the physics of the point of reference, more so than the specific attributes of every member that gets put in the box. After all, the OP question was about the lifetimes of fully convective stars, more so than about the different things that get called red dwarfs.
    Last edited by Ken G; 2020-Feb-06 at 01:39 PM.

  19. #18
    Join Date
    Feb 2009
    Posts
    2,255
    In the solar neighbourhood, the theoretically proposed limit (at about 0.25 solar masses) between fully convective stars and stars with some radiative core seems to fall between:
    Luyten's Star, magnitude 11.97, M3.5, mass 0.26 solar
    Ross 614A, magnitude 13.09, M4.5, mass 0.22 solar.
    The end of the main sequence seems to fall between:
    GJ1245C, magnitude 18.46, M5.5, estimated at 0.07 solar mass
    and
    DEN 1048-3956, magnitude 19.37, M8.5, called a brown dwarf
    SCR 1845-6357, magnitude 19.41, M8.5, called a red dwarf.
    Ross 614 is conveniently a binary, so red dwarfs of different mass can be compared and contrasted there.
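
    For a rough feel of the luminosity span those magnitudes imply (my own arithmetic, assuming the values quoted are absolute magnitudes in the same band):

    def lum_ratio(mag_brighter, mag_fainter):
        # Standard magnitude relation: L1/L2 = 10**(0.4*(m2 - m1))
        return 10 ** (0.4 * (mag_fainter - mag_brighter))

    # Luyten's Star (11.97) vs GJ1245C (18.46): the span of the fully
    # convective red dwarfs listed above.
    print(lum_ratio(11.97, 18.46))  # ~400, i.e. several hundred times brighter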

  20. #19
    Join Date
    Oct 2005
    Posts
    27,081
    So you're saying that there's still a factor of 3 difference in the masses of completely convective red dwarfs, and that's a very steep difference in luminosity, several orders of magnitude. Such a large difference could not be accounted for by the radius alone, so the stellar surface temperature must drop a lot at the low-mass end, in contradiction to the claim that fully convective stars always have similar surface T. I think the discrepancy there is that the Hayashi track logic I mentioned assumes an ideal gas, and when you get down to the 0.07 solar-mass stars, you are getting into some serious quantum mechanical corrections, often called "degeneracy." It can't be highly degenerate or the fusion would go unstable and remove the degeneracy, but there must still be enough degeneracy to lower the luminosity a great deal. We can say that the onset of fusion at 0.07 solar masses marks a break between highly degenerate behavior at lower mass and much less degenerate behavior at higher mass, but there is a transition happening from 0.07 to 0.22 solar masses, and that transition comes with a big jump in luminosity, until you get to very ideal gases and the Hayashi behavior of fairly fixed surface temperature.
    Last edited by Ken G; 2020-Feb-07 at 04:52 PM.

  21. #20
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Ken G View Post
    so the stellar surface temperature must drop a lot at the low-mass end, in contradiction to the claim that fully convective stars always have similar surface T. I think the discrepancy there is that the Hayashi track logic I mentioned assumes an ideal gas, and when you get down to the 0.07 solar-mass stars, you are getting into some serious quantum mechanical corrections, often called "degeneracy."
    Indeed. Pretty obviously planets are not required to contract till they heat up to Hayashi track temperature.
    Quote Originally Posted by Ken G View Post
    It can't be highly degenerate or the fusion would go unstable and remove the degeneracy,
    No, it wouldn't.
    The instability is nonlinear. It can only be resolved in one direction. Fusion can go unstable, but cannot remove degeneracy - it can remove fusion.
    Quote Originally Posted by Ken G View Post
    but there must still be enough degeneracy to lower the luminosity a great deal. We can say that the onset of fusion at 0.07 solar masses marks a break between highly degenerate behavior at lower mass and much less degenerate behavior at higher mass,
    It is not "onset" of fusion.
    If you look at an ideal gas sphere, its characteristic temperature rises with the inverse of the radius, going to infinity at zero radius.
    If you look at a highly degenerate gas/liquid/solid, its characteristic temperature falls steeply with radius and goes to zero at a nonzero radius.
    What happens with a moderately degenerate gas is that the temperature goes through a maximum at some point and then falls.
    A protoplanet passes through that maximum in contraction.
    Now, the rate of heat loss depends on the internal temperature and conductivity. If convection happens, it can be bigger than the conductive heat loss, but not smaller.
    When a protostar or proto-brown dwarf heats up, thermonuclear fusion reactions start in the interior. As the rate of thermonuclear reactions increases, it is at first lower than the rate of heat loss.

    For brown dwarfs, the rate of thermonuclear reactions will not quite match the rate of energy loss, even at its peak, which occurs near (not precisely at) the maximum temperature.

    The start of the main sequence is not the "onset" of fusion but a "failure to shut down" fusion.
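
    A dimensionless toy version of that temperature maximum (my own construction, with arbitrary constants a and b; only the shape and scalings are meant to be taken seriously):

    a, b = 1.0, 1.0  # arbitrary units, illustration only

    def core_temp(mass, rho):
        # Ideal-gas term rises as rho**(1/3); the degeneracy correction
        # subtracts a term growing as rho**(2/3), so T peaks and then falls.
        return a * mass ** (2/3) * rho ** (1/3) - b * rho ** (2/3)

    def peak_temp(mass):
        # Setting dT/drho = 0 gives rho**(1/3) = a*mass**(2/3)/(2*b), so:
        return a ** 2 * mass ** (4/3) / (4 * b)

    for m in (1.0, 0.5, 0.25):
        print(m, peak_temp(m))  # halving the mass cuts the peak T by ~2.5x

    # A contracting object whose peak temperature falls below the fusion
    # threshold never gets fusion to balance its heat loss: a brown dwarf.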

  22. #21
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by chornedsnorkack View Post
    Indeed. Pretty obviously planets are not required to contract till they heat up to Hayashi track temperature.
    And planets are not fully convective, so there wouldn't even be any reason to think in terms of a Hayashi track for them. But the main point is that very low mass objects are more strongly affected by quantum mechanical effects, an additional element to the answer to the OP that I did not include but is certainly part of a more complete answer.
    No, it wouldn't.
    The instability is nonlinear. It can only be resolved in one direction. Fusion can go unstable, but cannot remove degeneracy - it can remove fusion.
    That's not true. Of course fusion can remove degeneracy, in fact it will do just that to the Sun after it begins fusing helium. Fusion in significantly degenerate gas is always unstable and always reduces the degeneracy. Thus degeneracy can never stop fusion once it begins, it must prevent it from ever beginning, which is what it does in lower mass objects like gas giants, and also brown dwarfs when it comes to normal H fusion.
    It is not "onset" of fusion.
    If you look at an ideal gas sphere, its characteristic temperature rises with the inverse of the radius, going to infinity at zero radius.
    Yes, that's called the "virial theorem", but in no way does that imply we should not regard fusion as having an "onset" at some stellar mass. What is happening is that degeneracy sets a peak temperature that a star can achieve, and at low mass, that temperature is too cool for fusion. At higher mass, it is not too cool for fusion-- so we have a minimum mass for the onset of fusion.
    If you look at a highly degenerate gas/liquid/solid, its characteristic temperature falls steeply with radius and goes to zero at a nonzero radius.
    Of course, that's why there is a minimum mass for the onset of fusion.
    What happens with a moderately degenerate gas is that the temperature goes through a maximum at some point and then falls.
    That's what degeneracy always does-- degeneracy is just precisely the quantum mechanical effect that lowers temperature relative to energy per particle. It works to set a mass-dependent peak temperature as a gaseous sphere contracts, and one then must ask if that peak temperature is enough to have fusion or not. If it is, fusion will have its way.
    A protoplanet passes through that maximum in contraction.
    So do stars, ultimately leading to the white dwarf phenomenon.
    The start of the main sequence is not the "onset" of fusion but a "failure to shut down" fusion.
    Again, degeneracy can never "shut down" fusion that is occurring, but it can sure keep it from ever happening. If the temperature gets high enough to initiate fusion, we have an onset of fusion. That's no different from a "failure to shut down" fusion. Why do you think that distinction is necessary to make? It all seems pretty clear to me: an onset of fusion for stars with more than 0.07 solar masses, and no onset of fusion for stars with less than that.
    Last edited by Ken G; 2020-Feb-08 at 09:59 AM.

  23. #22
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Ken G View Post
    That's not true. Of course fusion can remove degeneracy, in fact it will do just that to the Sun after it begins fusing helium. Fusion in significantly degenerate gas is always unstable and always reduces the degeneracy. Thus degeneracy can never stop fusion once it begins, it must prevent it from ever beginning, which is what it does in lower mass objects like gas giants, and also brown dwarfs when it comes to normal H fusion.
    But the degeneracy of the gas allows fusion to be unstable and stop.
    Quote Originally Posted by Ken G View Post
    Yes, that's called the "virial theorem", but in no way does that imply we should not regard fusion as having an "onset" at some stellar mass. What is happening is that degeneracy sets a peak temperature that a star can achieve, and at low mass, that temperature is too cool for fusion. At higher mass, it is not too cool for fusion-- so we have a minimum mass for the onset of fusion.
    Of course, that's why there is a minimum mass for the onset of fusion.
    That's what degeneracy always does-- degeneracy is just precisely the quantum mechanical effect that lowers temperature relative to energy per particle. It works to set a mass-dependent peak temperature as a gaseous sphere contracts, and one then must ask if that peak temperature is enough to have fusion or not. If it is, fusion will have its way.
    So do stars, ultimately leading to the white dwarf phenomenon.
    Again, degeneracy can never "shut down" fusion that is occurring, but it can sure keep it from ever happening. If the temperature gets high enough to initiate fusion, we have an onset of fusion. That's no different from a "failure to shut down" fusion. Why do you think that distinction is necessary to make? It all seems pretty clear to me: an onset of fusion for stars with more than 0.07 solar masses, and no onset of fusion for stars with less than that.
    Because fusion is not just a matter of "have fusion or not".
    The Sun has a core temperature of 15 million K. Low-mass red dwarfs have core temperatures of 4 million K, and that's enough to sustain fusion.

    Obviously when the young Sun had a core temperature of just 5 million K and was much bigger than now, it was already hotter than red dwarfs. The young Sun must therefore already have had fusion.
    The difference is that the young Sun, being much more tenuous than a red dwarf, conducted heat away more readily, so the nonzero but modest amount of fusion that is sufficient to sustain a red dwarf was not sufficient to stop the young Sun from contracting, till the Sun did heat up to 15 million K.
    If a red dwarf at 4 million K has fusion in amounts sufficient to meet its heat losses, then it makes sense that a young brown dwarf at 3 million K also has significant amounts of fusion - just a bit less than the heat loss to conduction, so the brown dwarf continues Helmholtz contraction. But due to the extent of degeneracy, instead of heating up as it shrinks like a young red dwarf does, the young brown dwarf passes a temperature maximum without quite balancing the energy losses, and as the temperature falls, the fusion energy production drops (faster than the conductive heat loss).

  24. #23
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by chornedsnorkack View Post
    But the degeneracy of the gas allows fusion to be unstable and stop.
    Instability can go either way. If it stops, it stops for a short time, but eventually it will go the other way and increase the fusion. That's what cannot be reversed: once fusion initiates in substantially degenerate gas and you get a perturbation that generates even faster fusion, it just goes faster and faster until the degeneracy is suitably lifted. When the Sun does that, it will be called the "helium flash."
    Because fusion is not just a matter of "have fusion or not".
    Actually, it kind of is. Nothing is ever "black or white," but making a clear distinction can be insightful. In this case, the value of the distinction is that if fusion is ever fast enough to go unstable in degenerate gas (where "fast enough" means that the fusion rate can add excess heat faster than it can be removed, i.e., if it is still unstable after accounting for the heat loss timescale), then that unstable fusion will just play out until it is stabilized by the removal of the degeneracy. After that, the stabilized fusion will pause the evolution until the fusion has played out. It's very much an "on/off switch" in that sense, and that is what motivates the concept of a "main sequence star."
    The Sun has a core temperature of 15 million K. Low-mass red dwarfs have core temperatures of 4 million K, and that's enough to sustain fusion.
    The key issue is whether there is enough fusion to compete with the rate of heat loss. It is true that right at the boundary of the "onset" of fusion, you can have stars whose fusion rate never quite reaches the rate the star is losing heat. The mass where those two rates do equal each other is what is meant by the "onset" mass. Above that mass, fusion will pause the evolution, and below it, fusion will never be of consequence. It's a pretty sharp transition between the two because the fusion rate is highly temperature sensitive. But the key point is, if the fusion rate is able to match the rate of heat loss, then the star will cease to get more degenerate until all the fusible fuel is used up. Degeneracy must await the end of fusion, not the other way around. But on the other side of the "onset" distinction, degeneracy will kick in before fusion is ever significant, and that will preclude it from ever getting significant. So what this means is, the language you are using is appropriate only below the mass cutoff, whereas talking about a fusion "onset" is focusing on what is happening above that cutoff. Still, the rapid transition between these two regimes is exactly what is intended by the "onset" language.
    Obviously when the young Sun had a core temperature of just 5 million K and was much bigger than now, it was already hotter than red dwarfs. The young Sun must therefore already have had fusion.
    What matters is the temperature where the fusion rate can match the rate heat is being lost. The Sun had a much higher luminosity than a red dwarf when fusion first became important for the Sun, so the "onset" of fusion for the Sun came at a higher luminosity and a higher core temperature. But that was also a fairly rapid transition from fusion not mattering, to fusion pausing the evolution of the star. Indeed, the way to know what the core temperature of a main-sequence star will be is to ask what luminosity the star has, and then let the fusion self-regulate to supply that luminosity-- that's what physically sets the core temperature of a main-sequence star. Below what is being called the onset mass, the fusion rate never matches the stellar luminosity, so it can be neglected. Even Jupiter has some fusion rate, it's just way too tiny to matter for anything, but a star at 0.07 solar masses will be able to meet its luminosity needs via fusion-- at least until the fusible fuel runs out.
    The difference is that the young Sun, being much more tenuous than a red dwarf, conducted heat away more readily, so the nonzero but modest amount of fusion that is sufficient to sustain a red dwarf was not sufficient to stop the young Sun from contracting, till the Sun did heat up to 15 million K.
    Tenuousness doesn't matter, you can tell that because the Sun did not change its luminosity much as it contracted toward the main sequence. It turns out that for radiative stars, only the mass matters in determining the luminosity, not the radius or density. A higher mass star has a much higher luminosity, so its core must get hotter before we can say fusion has begun in earnest, but a higher mass star also reaches higher core temperature before degeneracy sets in, so it works out that it is the higher mass stars that have fusion "onset" (meaning, a fusion rate that matches their luminosity).
    If a red dwarf at 4 million K has fusion in amounts sufficient to meet its heat losses, then it makes sense that a young brown dwarf at 3 million K also has significant amounts of fusion - just a bit less than the heat loss to conduction, so the brown dwarf continues Helmholtz contraction.
    The brown dwarf might not ever reach enough of a fusion rate to match its luminosity, or maybe it does for a short while but it's not really the defining element of the brown dwarf, because it's not fusion of a copious element like hydrogen-- so what sustains the star must be the heat released by gravitational contraction, making it more like a gas giant.
    But due to the extent of degeneracy, instead of heating up as it shrinks like a young red dwarf does, the young brown dwarf passes a temperature maximum without quite balancing the energy losses, and as the temperature falls, the fusion energy production drops (faster than the conductive heat loss).
    That is true, but that is only true for stars for whom there has not been a fusion onset-- meaning, fusion has never supplied the luminosity of that star, or if it did for a short while, it wasn't doing so when the star went degenerate. So what is going on is, we have a pretty clear distinction between stars for whom fusion is of central importance (and for them degeneracy must always wait until the fusible fuel is gone), and stars for whom fusion is not important (and they can make sure fusion never becomes important by going degenerate). The first group is what is meant by stars that have had a fusion onset, and the second group have not.

    A lot of what we are saying is the same thing, it's just taking a different tack on the meaning of the word "onset." My main point is that for degeneracy to prevent fusion from occurring in earnest it must do so before fusion passes a kind of "point of no return", and that's the onset concept. Once past that "tipping point", fusion will destroy the degeneracy, so degeneracy must kick in prior to that (as it does for brown dwarfs) or it must wait for fusion to be over (as it will do for the future Sun). That distinction is what is meant by saying the Sun has had hydrogen fusion onset, and a brown dwarf has not and never will.
    Last edited by Ken G; 2020-Feb-08 at 02:24 PM.

  25. #24
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Ken G View Post
    Instability can go either way. If it stops, it stops for a short time, but eventually it will go the other way and increase the fusion.
    Not in a degenerate gas. In an ideal gas, heat loss causes contraction and heating up, which increases fusion. In degenerate matter, heat loss causes cooling, which decreases fusion.
    Quote Originally Posted by Ken G View Post
    That's what cannot be reversed: once fusion initiates in substantially degenerate gas and you get a perturbation that generates even faster fusion, it just goes faster and faster until the degeneracy is suitably lifted. When the Sun does that, it will be called the "helium flash."
    That's because the helium core in the deep interior of the Sun is heated to increasing temperatures by the non-degenerate hydrogen shell while already degenerate.
    A protostar or a protoplanet approaches degeneracy or fusion from the other direction. It undergoes contraction and heating up while not degenerate, and then either becomes degenerate and cools down or sustains fusion and does not get more degenerate than it already is.
    Quote Originally Posted by Ken G View Post
    Actually, it kind of is. Nothing is ever "black or white," but making a clear distinction can be insightful. In this case, the value of the distinction is that if fusion is ever fast enough to go unstable in degenerate gas (where "fast enough" means that the fusion rate can add excess heat faster than it can be removed, i.e., if it is still unstable after accounting for the heat loss timescale), then that unstable fusion will just play out until it is stabilized by the removal of the degeneracy.
    Again, I stress - degeneracy is not removed, it is prevented, never having occurred on the path to the stable state.
    And the ideal gas behaviour is steep rise of the temperature with inverse radius. Therefore, on the approach to the maximum of temperature, as the rise of temperature slows down, the behaviour is already significantly nonideal and partially degenerate. For the stars where the fusion is stabilized near but before the temperature maximum, they are significantly degenerate on main sequence.
    Quote Originally Posted by Ken G View Post
    Tenuousness doesn't matter, you can tell that because the Sun did not change its luminosity much as it contracted toward the main sequence. It turns out that for radiative stars, only the mass matters in determining the luminosity, not the radius or density. A higher mass star has a much higher luminosity, so its core must get hotter before we can say fusion has begun in earnest, but a higher mass star also reaches higher core temperature before degeneracy sets in, so it works out that it is the higher mass stars that have fusion "onset" (meaning, a fusion rate that matches their luminosity).
    Tenuousness matters because if you compare a low-mass star and a high-mass star of equal central temperature, the high-mass star is required to have a higher luminosity. And that higher luminosity comes from two factors - larger area and lower density.

  26. #25
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by chornedsnorkack View Post
    Not in a degenerate gas. In an ideal gas, heat loss causes contraction and heating up, which increases fusion. In degenerate matter, heat loss causes cooling, which decreases fusion.
    Yes I know, that's consistent with everything I said. If you take a perturbation of lower heating in a gas that is degenerate enough to suffer from the instability I am talking about, the fusion turns off completely; that's a consequence of being unstable. But then wait a little bit, and use a perturbation that increases heating-- it will run away; that is also a consequence of being unstable. The former can happen many times, but the latter only needs to happen once. This is what makes it inevitable that degenerate gas at the threshold of the necessary conditions where the fusion rate can match the heat transport rate will undergo fusion runaway. That's called the helium flash for the future of our Sun, but the same holds in any situation where you have fusion in degenerate enough gas.
    That's because the helium core in the deep interior of the Sun is heated to increasing temperatures by the non-degenerate hydrogen shell while already degenerate.
    What matters is how degenerate it is, of course.
    A protostar or a protoplanet approaches degeneracy or fusion from the other direction. It undergoes contraction and heating up while not degenerate, and then either becomes degenerate and cools down or sustains fusion and does not get more degenerate than it already is.
    I know, that's what I've been saying. When the mass is higher, fusion tends to initiate prior to degeneracy being significant, and when lower, it's the opposite. But in situations where the degeneracy has managed to appear first and yet the threshold for significant fusion is reached all the same (typically when the mass is growing slowly), the onset of fusion appears with an unstable runaway that removes the degeneracy until it is no longer unstable. The history of how it got there is of no consequence, what matters is simply how degenerate it is when the fusion threshold is reached, or if it is never reached at all. The concept of a "threshold" is well defined-- it is a place where the fusion rate adds heat faster than it can be transported away, so will result in a net heating that the star must respond to. That response will either stabilize, or destabilize, that fusion rate, depending on the degeneracy.
    Again, I stress - degeneracy is not removed, it is prevented, never having occurred on the path to the stable state.
    Sometimes it works that way, sometimes the degeneracy has become pretty advanced but fusion occurs anyway, and then it starts with a runaway that removes the degeneracy. It all depends on the larger situation.

    Let's take a step back and see what is being claimed here. I am saying that the concept of fusion "onset" in main-sequence stars is a perfectly well formed idea. The issue is whether the fusion rate is ever able to match the star's luminosity-- if so, we say we have the "onset of fusion." If not, we simply neglect fusion. This works fine except very close to the place where the line is drawn, but such is the nature of drawing lines. Degeneracy does not alter these statements in any way, all degeneracy can do is cause the fusion rate to peak before it can match the luminosity, i.e., it can prevent the onset of fusion (that's what sets the low-mass end of the main sequence). If degeneracy is unable to prevent the onset, then the fusion rate will match the luminosity. This typically happens when the degeneracy is still weak, too weak to have much of an effect at all, and since there is no net loss of heat, the degeneracy will also not grow. It must wait until all the fusion reserves have been used up. So what we have is a very clear dividing line between where degeneracy prevents the fusion rate from being able to grow to the level of the luminosity, and where the growth of the fusion rate to the level of the luminosity prevents degeneracy from being able to grow enough to do much of anything. It is a simple competition, degeneracy wins at low mass, fusion at high mass. That shift in the winner from degeneracy to fusion is what is known as "the onset of fusion" and the birth of a main-sequence star.
    And the ideal gas behaviour is steep rise of the temperature with inverse radius. Therefore, on the approach to the maximum of temperature, as the rise of temperature slows down, the behaviour is already significantly nonideal and partially degenerate. For the stars where the fusion is stabilized near but before the temperature maximum, they are significantly degenerate on main sequence.
    It's a purely subjective issue what constitutes "significantly." As I said, right near a dividing line, one will always find it difficult to apply either simplified limit, but that's not the point of a dividing line. The point is to distinguish two very different situations, accepting that in a narrow transition region you are always going to see some kind of hybrid situation. But if the transition is sharp, as it is here, this need not be regarded as a problem.
    Tenuousness matters because if you compare a low-mass star and a high-mass star of equal central temperature, the high-mass star is required to have a higher luminosity. And that higher luminosity comes from two factors - larger area and lower density.
    Tenuousness does not matter for a radiative star, that's a consequence of radiative diffusion and can be seen in evolutionary tracks of pre-main-sequence stars with masses greater than the Sun. Things get more complicated for the Sun, due to its large convection zone, where we start to see the hybrid Hayashi-like behavior where the lessening radius tends to reduce the luminosity (so losing tenuousness is not increasing luminosity, it is actually reducing it). This is a consequence of the much more gradual transition between the two regimes of mostly radiative and mostly convective, so idealizations become more difficult but still of value. But none of that is of great significance for the onset of fusion, the point is simply that some kind of physics determines the luminosity of the star, and whether or not we should say fusion has reached its "onset" is simply the question of whether the fusion rate has been able to rise enough to match that luminosity. If it has, degeneracy must wait in the wings if it wasn't there yet, or be removed if it was. If the fusion threshold is not reached, degeneracy is generally the reason why it might never happen. It's basically that degeneracy can sometimes prevent fusion onset if it gets there first, and that's the difference between a star that gets called a red dwarf, and one that gets called a brown dwarf. But sometimes degeneracy gets there first but something else is changing that still allows fusion onset (typically rise in mass), and then the degeneracy is banished until fusion has taken its course. For that latter scenario to happen to a red dwarf, it would have to have experienced a growth in mass that took it from a brown dwarf mass to a red dwarf mass. What we can say about red dwarfs is, since they are undergoing stable fusion, there is a strict limit as to how degenerate they can be.

    Note this also raises the interesting possibility that a brown dwarf could be found with a mass above 0.07 solar masses. It would merely need to cool so much as a brown dwarf that even adding mass to it does not reach the fusion threshold. It also means that if the brown dwarf has not cooled much and is significantly but not highly degenerate when the mass is added, hydrogen fusion could initiate unstably. Who knows if it is even possible to get a miniature type Ia supernova that way, if the cooling and mass-adding timescales are well tuned.
    Last edited by Ken G; 2020-Feb-09 at 05:14 PM.

  27. #26
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Ken G View Post
    Yes I know, that's consistent with everything I said. If you take a perturbation of lower heating in a gas that is degenerate enough to suffer from the instability I am talking about, the fusion turns off completely; that's a consequence of being unstable. But then wait a little bit, and use a perturbation that increases heating-- it will run away; that is also a consequence of being unstable.
    No, it won't.
    The longer you wait, the bigger the "perturbation" you need to heat the gas to the other side of the unstable equilibrium, because the longer you wait, the colder the gas is.
    Quote Originally Posted by Ken G View Post
    The former can happen many times, but the latter only needs to happen once. This is what makes it inevitable that degenerate gas at the threshold of the necessary conditions where the fusion rate can match the heat transport rate will undergo fusion runaway.
    You have two nearby equilibria. The stable one is at a slightly bigger radius, and the unstable one at a slightly smaller. If you deviate from the stable equilibrium towards the bigger radius and lower temperature, it is restored and can happen many times. If you pass the unstable equilibrium towards smaller radius and lower temperature, the runaway cooling can happen just once.
    Quote Originally Posted by Ken G View Post
    That's called the helium flash for the future of our Sun, but the same holds in any situation where you have fusion in degenerate enough gas.
    No. What you need is a procedure that gets you near the unstable equilibrium. And it may or may not get you past it.
    Quote Originally Posted by Ken G View Post
    Note this also raises the interesting possibility that a brown dwarf could be found with a mass above 0.07 solar masses. It would merely need to cool so much as a brown dwarf that even adding mass to it does not reach the fusion threshold. It also means that if the brown dwarf has not cooled much and is significantly but not highly degenerate when the mass is added, hydrogen fusion could initiate unstably. Who knows if it is even possible to get a miniature type Ia supernova that way, if the cooling and mass-adding timescales are well tuned.
    Protium fusion cannot be sped up so much due to the weak interaction step.
    Note that the red dwarf luminosity depends on conductivity. A highly conductive star would have a higher luminosity for a given internal temperature (and fusion energy production).
    Which means that a star of highly conductive gas may not be able to meet its luminosity by fusion where a less conductive star of equal mass would, and would therefore be a brown dwarf where the less conductive star of the same mass is a red dwarf.

  28. #27
    Join Date
    Oct 2005
    Posts
    27,081
    Quote Originally Posted by chornedsnorkack View Post
    No, it won't.
    The longer you wait, the bigger the "perturbation" you need to heat the gas to the other side of the unstable equilibrium, because the longer you wait, the colder the gas is.
    No, that just isn't right. A negative perturbation can only turn off the fusion, it cannot take it into negative territory that removes heat! Instead, with no fusion, the star simply returns to the evolutionary pressures that brought it into the fusion domain in the first place, and will repeat that as many times as it takes to get fusion onset.

    Imagine you start out with an energy equilibrium that does not involve fusion. You have a heat loss mechanism, and something that is adding heat, but not fusion-- they essentially balance, but there is a tiny discrepancy that is causing evolution toward higher temperature, and there starts to be more and more of a fusion component. Indeed the fusion component is very temperature sensitive, so it is becoming more and more important very quickly. Eventually it is important enough that it is capable of adding heat faster than it can be transported out, and at that point we must assess its stability.

    If it is stable, as in the core of the Sun now (as you well know), the fusion rate will come to match the stellar luminosity, because any perturbation that makes the fusion rate less than that luminosity will cause contraction and temperature rise, raising the fusion rate. Perturbations that make the fusion rate too high will cause expansion and lower the fusion rate. But in highly degenerate conditions, a perturbation that reduces the fusion rate will return us to the previous situation where there was evolution toward higher temperature due to the other processes in the energy equation. So the unstable situation can never prevent the onset of fusion, there are evolutionary pressures that will eventually turn the fusion on. And that will happen as soon as the fusion rate has an opportunity to exceed the rate heat is transported out, because then any positive perturbation in the fusion rate will dump heat and cause temperature rise, leading to an even greater fusion rate. Instability, coupled with evolutionary pressure toward higher temperature, will always result in fusion runaway, and that will always result in removing the degeneracy until the fusion rate can either be stabilized, or has burnt up its fuel.
    You have two nearby equilibria. The stable one is at a slightly bigger radius, and the unstable one at a slightly smaller. If you deviate from the stable equilibrium towards the bigger radius and lower temperature, it is restored and can happen many times. If you pass the unstable equilibrium towards smaller radius and lower temperature, the runaway cooling can happen just once.
    The equilibria of which you speak need not be "nearby"; they can be widely separated-- it depends on the history of the system. What also depends on the history of the system is which way the system will evolve-- toward the stable equilibrium, or away from it. You can imagine a downward sloping track with a hump in its center-- where a cart rolled down that track ends up depends on where you start it. If it ends up at the bottom of the dip, that's stable fusion, and we say we have an onset of fusion. If it ends up somewhere else, fusion has been avoided. The unstable region is near the top of the hump, and again how the cart responds to that hump depends on its history.

    So let's look at the situation we are interested in, a low-mass protostar. We have two situations we can consider-- we can keep the mass constant and let it lose heat and contract, or we can also have it gain mass as it is losing heat. The former case is the more standard picture, so let's go with that. In force balance, with constant mass, the core temperature will rise to a peak, and then degeneracy will bring it back down. But if the peak is at a high enough T for fusion to be able to match the heat loss while the T is rising, then it will do so-- it will pause the net heat loss, and we will have a main-sequence star, stably fusing. We will say we had an "onset" of fusion. If, on the other hand, even at the peak T the fusion rate cannot match the luminosity, the evolution will continue downward to lower T. We will say there has not been an onset of significant fusion, and we will likely ignore fusion altogether unless we have some detailed question in mind. All this holds because no unstable region has been entered-- we never had fusion in the same breath as a high degree of degeneracy. In your language above, it means the history of the star has prepared it at the larger, stable radius, so it has no knowledge of the unstable one that is not "nearby."

    For example, when our Sun's core T was a little too low for significant fusion of H, it had a radius and core T, but if it instead had a much smaller radius and been highly degenerate, it could have also had a T slightly too low for significant fusion. The evolutionary pressures that led to the former history then caused stable fusion to initiate, whereas the hypothetical evolution that led to the latter situation would result in fusion runaway. Adding mass after significant degeneracy has been achieved, is the path to that latter scenario. To know which happens, we need to know the history of adding mass, versus losing heat-- we need to know where the cart is on that hill. But one thing we can be sure of is, if we ever get a high enough temperature for the fusion rate to exceed the rate heat is being lost, and the degeneracy at that point is also high, we can find ourselves in the unstable regime where an upward perturbation in the fusion rate will raise the temperature further, rather than lowering it as it would in the stable case. That will always lead to runaway, because a downward perturbation in the fusion rate, in that situation, can only take us to the edge of the domain of fusion-- where the evolutionary pressure that took us into the fusion regime will still be present, and will take us right back again, as many times as necessary to get the positive perturbation that causes runaway.

    Now since all this depends sensitively on the history of adding mass and losing heat, I don't know if protostars ever enter the domain of unstable fusion. Generally we imagine the star has ceased gaining mass before it ever gets that hot, so most protostars likely don't experience unstable fusion. Even so, the "onset" concept is perfectly valid, as described above. The situations where we think the history is conducive to unstable fusion are the "helium flash" that our Sun will undergo, and type Ia supernovae. It is also thought that carbon and silicon fusion in the cores of more massive stars will initiate with a "flash" as well, though it is of little consequence. The point is, the combination of high degeneracy with a significant temperature-sensitive fusion rate always leads to some degree of fusion runaway.

    No. What you need is a procedure that gets you near the unstable equilibrium. And it may or may not get you past it.
    That's what's wrong in your argument. If there was evolutionary pressure to get to the unstable equilibrium (which is how it got there), then it will always lead to runaway.
    Protium fusion cannot be sped up so much due to the weak interaction step.
    Good point, it would be a very "mini" supernova indeed, not explosive at all.
    Note that the red dwarf luminosity depends on conductivity. A highly conductive star would have a higher luminosity for a given internal temperature (and fusion energy production).
    However it does it, the star has a way of setting its own luminosity, and then the stable fusion rate self-regulates to provide it.
    Which means that a star of higly conductive gas may not be able to meet its luminosity by fusion where a less conductive star of equal mass would, and therefore be a brown dwarf where the less conductive star of the same mass is a red dwarf.
    It would never be much of a problem to provide the luminosity. The protium fusion rate you mentioned, though slow by comparison to other rates in the process, can still get plenty high enough-- as it does in our Sun. That's the nature of strong temperature sensitivity-- it easily self-regulates to provide what is needed. So what should be said instead is that a star with a higher luminosity will have to evolve a bit further to reach the point where fusion can supply it, but it's helped by the way that higher luminosity will speed up its evolution-- that's all counted in the time it takes a low-mass star to reach the main sequence in the first place.
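
    Here is a minimal sketch of that self-regulation point (toy numbers, not a stellar model; n ~ 4 is the usual estimate for the pp chain's temperature sensitivity near solar core conditions):

    def required_temp(lum, n=4.0, c=1.0):
        # If the fusion rate scales as eps = c*T**n, the core temperature
        # needed to supply a luminosity lum is T = (lum/c)**(1/n).
        return (lum / c) ** (1.0 / n)

    print(required_temp(100.0) / required_temp(1.0))  # ~3.2

    # A hundredfold change in the required luminosity demands only about a
    # 3x change in core temperature, and with a CNO-like n ~ 17 the factor
    # drops to ~1.3. That steepness is what makes the self-regulation easy.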
    Last edited by Ken G; 2020-Feb-11 at 05:28 PM.

  29. #28
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Ken G View Post
    Good point, it would be a very "mini" supernova indeed, not explosive at all.
    However it does it, the star has a way of setting its own luminosity, and then the stable fusion rate self-regulates to provide it.
    When the fusion rate cannot meet the luminosity, the star contracts. When the fusion rate exceeds the luminosity, the star expands, or accelerates its expansion.
    The importance of the slowness of protium fusion is that it remains slower than the free-fall timescale even when accelerated, due to those weak-interaction steps.
    Quote Originally Posted by Ken G View Post
    It would never be much of a problem to provide the luminosity. The protium fusion rate you mentioned, though slow by comparison to other rates in the process, can still get plenty high enough-- as it does in our Sun. That's the nature of strong temperature sensitivity-- it easily self-regulates to provide what is needed. So what should be said instead is that a star with a higher luminosity will have to evolve a bit further to reach the point where fusion can supply it, but it's helped by the way that higher luminosity will speed up its evolution-- that's all counted in the time it takes a low-mass star to reach the main sequence in the first place.
    No. Providing the luminosity IS a problem - that's what defines brown dwarfs. Young brown dwarfs produce a significant amount of energy by protium fusion, but not enough to meet the heat loss by conduction. Red dwarfs do provide enough energy.
    Which means you could have two young stars of equal mass, internal temperature, and fusion energy production, of which one has low conductivity and low luminosity and meets that luminosity by fusion, being a red dwarf, while the other, with higher conductivity, has a higher luminosity, which it cannot meet by fusion and never will be able to, and is a brown dwarf.

  30. #29
    Join Date
    Mar 2004
    Posts
    18,680
    Quote Originally Posted by chornedsnorkack View Post
    No. Providing the luminosity IS a problem - that's what defines brown dwarfs. Young brown dwarfs produce a significant amount of energy by protium fusion, but not enough to meet the heat loss by conduction.
    I thought limited/unsustainable P-P fusion only occurred in high mass brown dwarfs, with deuterium fusion and lithium burning in lower mass ones?

    Red dwarfs do provide enough energy.
    Which means you could have two young stars of equal mass, internal temperature, and fusion energy production, of which one has low conductivity and low luminosity and meets that luminosity by fusion, being a red dwarf, while the other, with higher conductivity, has a higher luminosity, which it cannot meet by fusion and never will be able to, and is a brown dwarf.
    Wouldn’t that be determined by metallicity, and only near the brown dwarf/red dwarf mass boundary?

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  31. #30
    Join Date
    Feb 2009
    Posts
    2,255
    Quote Originally Posted by Van Rijn View Post
    I thought limited/unsustainable P-P fusion only occurred in high mass brown dwarfs, with deuterium fusion and lithium burning in lower mass ones?

    Wouldn’t that be determined by metallicity, and only near the brown dwarf/red dwarf mass boundary?
    Yes. The topic that concentrated attention here was the question of what determines red dwarf luminosity, so the focus was around the red dwarf/brown dwarf boundary.
    Now, metallicity has four effects I see - three less important, one more so:
    The density of gas at any pressure depends on metallicity. But that is a minor effect for stars containing under 2 % metals by mass.
    The speed of fusion at any temperature depends on the protium mass fraction. But again, since the metallicity is under 2 %, the protium mass fraction does not differ by much.
    The speed of fusion at any temperature also depends on the availability of C, N and O. But at the low temperatures of a red dwarf, CNO processes are difficult, and the pp and pep processes relatively easier.
    The important effect, however, is probably the effect of metallicity on conductivity.
