Robert's Stochastic thoughts
Asymptotically we'll all be dead
Saturday, October 05, 2024
Treatment of Autoimmune Diseases
Directing Lymphokines to the desired cells.
Wednesday, September 18, 2024
A Song of Ice and Fire
Wednesday, August 14, 2024
Does Autophagy slow aging and slow the progression of neuro-degenerative diseases ?
Sunday, June 16, 2024
A Natalist, Nativist, Nationalist Case for the Child Tax Credit
Thursday, March 07, 2024
Avatars of the Tortoise III
He concluded ""We (the indivisible divinity that works in us) have dreamed the world. We have dreamed it resistant, mysterious, visible, ubiquitous in space and firm in time, but we have allowed slight, and eternal, bits of the irrational to form part of its architecture so as to know that it is false."
I think I might have something interesting to say about that and I tried to write it here. If you must waste your time reading this blog, read that one not this one. But going from the sublime to the ridiculous, I have been a twit (but honestly not a troll) on Twitter. I said I thought we don't need the concept of a derivative (in the simple case of a scalar function of a scalar, the limit as delta x goes to zero of the ratio delta y over delta x - I insult you with the definition just to be able to write that my tweet got very, very ratioed).
In Avatars of the Tortoise II I argued that we can consider space-time to be a finite set of points, with each point in space the same little distance from its nearest neighbors and each unit of time the same discrete jump littleT from the most recent past. If I am right, we don't need derivatives to analyse functions of space, or of time, or position as a function of time (velocity and acceleration and such); there are just slopes. In such a model there are only slopes: any series of steps which goes to zero gets to zero after a finite number of steps, and the formula for a derivative would then require dividing 0 by 0.
I will make more arguments against derivatives. First I will argue that we learn nothing useful if we know the first, second, ... nth ... derivative of a function at X. Second I will argue that we can do what we do with derivatives using slopes. Third I will argue that current actual applied math consists almost entirely of numerical simulations on computers, which are finite state automata and which do not, in fact, handle continua when doing the simulating. They take tiny little steps (just as I propose).
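To make the third point concrete, here is a minimal sketch of what numerical work actually does: it computes slopes over finite steps, never limits. The function and the step size are arbitrary illustrations, not anything from the tweets.

```python
# A minimal sketch: on a discrete grid there are only slopes over finite steps.
def slope(f, x, h):
    """Forward-difference slope of f over one step of size h."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3
h = 1e-6  # the smallest step of this toy grid (an arbitrary choice)
print(slope(f, 2.0, h))  # about 12.000006; the "derivative" 12 is never computed as a limit
```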
I am going to make things simple (because I don't type so good and plain ASCII formulas are a pain). I will consider scalar functions of scalars (so the derivative will be AP calculus AB material). I will also consider only derivatives at zero.
f'(0) = limit as x goes to zero of (f(x)-f(0))/x
that is, for any positive epsilon there is a positive delta so small that if |x| < delta then |f'(0) - (f(x)-f(0))/x| < epsilon.
To go back to Avatars of the Tortoise I, another equally valid definition of the derivative at zero is the following: consider the infinite series x_t = (-0.5)^t.
f'(0) = the limit as t goes to infinity of (f(x_t)-f(0))/x_t that is, for any positive epsilon there is a positive N so big that, if t>N then
|f'(0) - (f(x_t)-f(0))/x_t|< epsilon.
So we have again the limit as t goes to infinity and the large enough N, with no way of knowing whether the t which interests us (say 10^1000) is large enough. Knowing the limit tells us nothing about the billionth element. The exact same number is the billionth element of a large infinity of series, some of which converge to A for any real number A, so any number is as valid an asymptotic approximation as any other, so none is valid.
Now very often the second to last step of the derivation of a derivative includes an explicit formula for f'(0) - (f(x)-f(0))/x, and then the last step consists of proving it goes to zero by finding a small enough delta as a function of epsilon. That formula right near the end is useful. The derivative is not. Knowing that there is a delta is not useful if we have no idea how small it must be.
In general, for any delta no matter how small and any epsilon no matter how small, there is a function f such that |f'(0) - (f(delta)-f(0))/delta| > 1/epsilon (I will give an example soon). "For any function there is a delta" does not imply that there is a delta which works for any function. The second would be useful. The first is not always useful.
One might consider the first, second, ... nth derivatives and an nth order Taylor series approximation, which I will call TaylorN(x).
For any N no matter how big, for any delta no matter how small, and for any epsilon no matter how small, there is a function f such that |TaylorN(delta) - f(delta)| > 1/epsilon.
For example, consider the function f such that
f(0) = 0, if x is not zero f(x) = (2e/epsilon)e^(-(delta^2/x^2))
f(delta) = 2/epsilon > 1/epsilon.
f'(0) is the limit as x goes to zero of
(f(x)-f(0))/x = (2e/epsilon)e^(-(delta^2/x^2))/x, which equals 0.
The nth derivative at zero is the limit as x goes to zero of a polynomial in 1/x times e^(-(delta^2/x^2)), and so also equals zero.
The Nth order Taylor series approximation of f(x) equals zero for every x.
For x = delta it is off by 2/epsilon > 1/epsilon.
There is no distance from zero so small and no error so big that one cannot find an example in which the Nth order Taylor series approximation is off by a larger error at that distance.
Knowing all the derivatives at zero, we know nothing about f at any particular x other than zero. Again, for any function and any epsilon there is a delta, but there isn't a delta which works for any function. Knowing all the derivatives tells us nothing about how small that delta must be, so nothing we can use.
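A minimal numerical check of the counterexample (the particular values of delta and epsilon are arbitrary):

```python
import math

delta, eps = 1e-3, 1e-3  # arbitrary small numbers, as in the text

def f(x):
    if x == 0.0:
        return 0.0
    return (2 * math.e / eps) * math.exp(-(delta ** 2) / (x ** 2))

# The difference quotients (f(x) - f(0))/x collapse toward zero as x shrinks ...
for x in (delta / 10, delta / 20, delta / 50):
    print(x, (f(x) - f(0)) / x)

# ... so every Taylor coefficient at zero is zero, yet at x = delta the function is huge.
print(f(delta), 2 / eps)  # both about 2000 = 2/epsilon > 1/epsilon
```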
So if things are so bad, why does ordinary calculus work so well ? It works for a (large) subset of problems. People have learned about them and how to recognise them, either numerically or with actual experiments or empirical observations. But that successful effort involved numerical calculations (that is, arithmetic, not calculus) or experiments or observations. It is definitely not a mathematical result that the math we use works. Indeed there are counterexamples (of which I presented just one).
Part 2 of 3 (not infinite even if it seems that way, but 3). If the world is observationally equivalent to a world with a finite set of times and places, then everything in physics is a slope. More generally, we can do what we do with derivatives and such stuff with discrete steps and slopes. We know this because that is what we do when faced with hard problems without closed form solutions. We hand them over to computers, which consider a finite set of numbers with a smallest step.
And that quickly gets me to part 3 of 3 (finally). One person on Twitter says we need to use derivatives etc. to figure out how to write the numerical programs we actually use in applications. This is an odd claim. I can read (some) source code (OK, I am barely source code literate as I am old, but some). I can write (some) high-level language source code. I can force myself to think in some (simple high-level language) source code (although in practice I use derivatives and such like). Unpleasant but not impossible.
Someone else says we use derivatives to know if the simulation converges or, say, if a dynamical system has a steady state which is a sink or stuff like that. We do, but there is no theorem that this is a valid approach and there are counterexamples (basically based on the super simple one I presented). All that about dynamics is about *local* dynamics and is valid if you start out close enough, and there is no general way to know how close is close enough. In practice people have found cases where linear and Taylor series (and numerical) approximations work and other cases where they don't (consider chaotic dynamical systems with positive Lyapunov exponents, and no I will not define any of those terms).
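A minimal sketch of that last point, using the logistic map x -> 4x(1-x) (a standard chaotic example with a positive Lyapunov exponent) rather than anything from the Twitter thread:

```python
# Two starting points that differ by 1e-12 under the chaotic logistic map x -> 4x(1-x).
x, y = 0.3, 0.3 + 1e-12
for t in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if t % 10 == 9:
        print(t + 1, abs(x - y))
# The gap grows from 1e-12 to order 1 within about 40 steps, so whether a starting
# point is "close enough" depends on how far ahead you want to predict.
```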
In every case, the invalid pretend pure math is tested with numerical simulations or experiments or observations. People learn when it works and tell other people about the (many) cases where it works, and those other people forget the history and pour contempt on me on Twitter.
Avatars of the Tortoise II
He concluded ""We (the indivisible divinity that works in us) have dreamed the world. We have dreamed it resistant, mysterious, visible, ubiquitous in space and firm in time, but we have allowed slight, and eternal, bits of the irrational to form part of its architecture so as to know that it is false."
I think rather that we have dreamed of infinity which has no necessary role in describing the objective universe which is "resistant, mysterious, visible, ubiquitous in space and firm in time*".
First, the currently favored theory is that space is not, at the moment, infinite but rather is a finite hypersphere. There was a possibility that time might end in a singularity as it began, but the current view is that the universe will expand forever. Bummer. I note however that the 2nd law of thermodynamics implies that life will not last forever (asymptotically we will "all" be dead, "we" referring to living things not currently living people). So I claim that there is a T so large that predictions of what happens after T can never be tested (as there will be nothing left that can test predictions).
However it is still arguable (by Blake) that we can find infinity in a grain of sand and eternity in an hour. Indeed when Blake wrote, that was the general view of physicists (philosophy makes the oddest bedfellows), as time was assumed to be a continuum with infinitely many distinct instants in an hour.
Since then physicists have changed their mind -- the key word above was "distinct", which I will also call "distinguishable" (and I dare the most pedantic pedant (who knows who he is) to challenge my interchanging the two words which consist of different letters).
The current view is that (delta T)(delta E) >= h/(4 pi), where delta T is the uncertainty in time of an event, delta E is the uncertainty in energy involved, h is Planck's constant, pi is the ratio of the circumference of a circle to its diameter, and damnit you know what 4 means.
delta E must be less than Mc^2, where M is the (believed to be finite) mass of the observable universe. So there is a minimum delta T, which I will call littleT. A universe in which time is continuous (and an hour contains an infinity of instants) is observationally equivalent to a universe in which time (from the big bang) is a natural number times littleT. The time from the big bang to T can be modeled as a finite number of discrete steps just as well as it can be modeled as a continuum of real numbers. This means that the question of which of these hypothetical possibilities time really is is a metaphysical question, not a scientific question.
Now about that grain of sand. There is another formula:
(delta X)(delta P) >= h/(4 pi)
X is the location of something, P is its momentum. |P|, and therefore delta P, is less than or equal to Mc/2, where M is the mass of the observable universe. The 2 appears because total momentum is zero. This means that there is a minimum delta X, and a model in which space is a lattice consisting of a dimension zero, countable set of separated points is observationally equivalent to the standard model in which space is a 3 dimensional real manifold. Again the question of what space really is is metaphysical, not scientific.
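A minimal back-of-the-envelope sketch of those two bounds. The mass of the observable universe is an assumed round figure (about 10^53 kg of ordinary matter is a commonly quoted order of magnitude); h and c are the standard constants.

```python
import math

h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s
M = 1e53        # assumed mass of the observable universe, kg (illustrative round figure)

littleT = h / (4 * math.pi * M * c ** 2)     # smallest distinguishable time step
littleX = h / (4 * math.pi * (M * c / 2))    # smallest distinguishable length step
print(littleT, littleX)  # roughly 6e-105 seconds and 4e-96 meters
```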
Recall that space is generally believed to be finite (currently a finite hypersphere). It is expanding. At T it will be really really big, but still finite. That means the countable subset of the 3 dimensional manifold model implies a finite number of different places. No infinity in the observable universe, let alone in a grain of sand.
There are other things than energy, time, space and momentum. I am pretty sure they can be modeled as finite sets too (boy am I leading with my chin there).
I think there is a model with a finite set of times and of places which is observationally equivalent to the standard model and, therefore, just as scientifically valid. Except for metaphysics and theology, I think we have no need for infinity. I think it is not avatars of the tortoise all the way down.
*note: not ubiquitous in time, as there was a singularity some time ago.
Wednesday, March 06, 2024
Asymptotically we'll all be dead II
"Asymptotically we'll all be dead" didn't get much of a response, so I am writing a simpler post about infinite series (which is the second in a series of posts which will not be infinite). First some literature: "Avatars of the Tortoise" is a brilliant essay by Jorge Luis Borges on paradoxes and infinity. Looking at an idea, or metaphor (I dare not type meme), over centuries was one of his favorite activities. In this case, it was alleged paradoxes based on infinity. He wrote "There is a concept which corrupts and upsets all others. I refer not to Evil, whose limited realm is that of ethics; I refer to the infinite."
When I first read "Avatars of the Tortoise" I was shocked that the brilliant Borges took Zeno's non-paradox seriously. The alleged paradox is based on the incorrect assumption that a sum of an infinite number of intervals of time adds up to forever. In fact, infinite sums can be finite numbers, but Zeno didn't understand that.
Zeno's story is (roughly translated and with updated units of measurement):
Consider the fleet footed Achilles on the start line and a slow tortoise 100 meters ahead of him. Achilles can run 100 meters in 10 seconds. The tortoise crawls forward one tenth as fast. The start gun goes off. In 10 seconds Achilles reaches the point where the tortoise started, but the tortoise has crawled 10 meters (this would only happen if the tortoise were a male chasing a female or a female testing the male's fitness by running away - they can go pretty fast when they are horny).
So the race continues to step 2. Achilles reaches the point where the tortoise was after 10 seconds in one more second, but the tortoise has crawled a meter.
Step 3, Achilles runs another meter in 0.1 seconds, but the tortoise has crawled 10 cm.
The time until Achilles passes the tortoise is an infinite sum. Silly Zeno decided that this means that Achilles never passes the tortoise, that the time until he passes him is infinite. In fact a sum of infinitely many numbers can be finite -- in this case 10/(1-0.1) = 100/9 < infinity.
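A minimal check of that sum (the number of steps is arbitrary; the partial sums settle down almost immediately):

```python
# Zeno's sum: 10 + 1 + 0.1 + 0.01 + ... = 10/(1 - 0.1) = 100/9 seconds.
total, term = 0.0, 10.0
for step in range(60):
    total += term
    term *= 0.1
print(total, 100 / 9)  # both about 11.111, so Achilles passes the tortoise after ~11.1 seconds
```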
Now infinite sums can play nasty tricks. Consider a series with terms x_t, t going from 1 to infinity. If the series converges to x but does not converge absolutely (so the sum of |x_t| goes to infinity), then one can make the series converge to any number at all by changing the order in which the terms are added. How can this be, given the axiom that addition is commutative ? Now that's a bit of a paradox.
The proof is simple; let's make the rearranged series converge to A. First note that the positive terms must add to infinity and the negative terms add to minus infinity (so that they cancel enough for the series to converge).
Now add positive terms until sumsofar > A (if A is negative this requires 0 terms). Now add negative terms until sumsofar < A, then positive terms until it is above A again, and so on. Since the individual terms go to zero, the partial sums converge to A.
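A minimal sketch of that procedure applied to the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..., which converges (to ln 2) but not absolutely; the target A below is arbitrary:

```python
A = 3.14159  # an arbitrary target
positives = (1.0 / n for n in range(1, 10 ** 7, 2))    # 1, 1/3, 1/5, ...
negatives = (-1.0 / n for n in range(2, 10 ** 7, 2))   # -1/2, -1/4, -1/6, ...

total = 0.0
for _ in range(200000):
    total += next(positives) if total <= A else next(negatives)
print(total)  # hovers around A = 3.14159, using exactly the same terms in a different order
```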
That's one of the weird things infinity does. I think that everything which is strongly counterintuitive in math has infinity hiding somewhere (no counterexamples have come to my mind and I have looked for one for decades).
Now I say that the limit of a series (the sum from t = 1 to T of the original series) as T goes to infinity is not, in general, of any practical use, because in the long run we will all be dead. I quote from "Asymptotically we'll all be dead":
Consider a simple "problem of a series of numbers X_t (not stochastic just determistic numbers). Let's say we are interested in X_1000. What does knowing that the limit of X_t as t goes to infinity is 0 tell us about X_1000 ? Obvioiusly nothing. I can take a series and replace X_1000 with any number at all without changing the limit as t goes to infinity.
Also not only does the advice "use an asymptotic approximation" often lead one astray, it also doesn't actually lead one. The approach is to imaging a series of numbers such that X_1000 is the desired number and then look at the limit as t goes to infinity. The problem is that the same number X_1000 is the 1000th element of a of an uh large infinity of different series. one can make up a series such that the limit is 0 or 10 or pi or anything. the advice "think of the limit as t goes to infinity of an imaginary series with a limit that you just made up" is as valid an argument that X_1000 is approximately zero as it is that X_1000 is pi, that is it is an obviously totally invalid argument.
An example is a series whose first googol (10^100) elements are one googol, so X_1000 = 10^100, and whose later elements are zero. The series converges to zero. If one uses the limit as t goes to infinity as an approximation when thinking of X_1000, then one concludes that 10^100 is approximately zero.
The point is that the claim that a series goes to x is the claim that (for that particular series) for any positive epsilon, there is an N so large that
if t > N then |x_t - x| < epsilon.
Knowing only the limit as t goes to infinity, we have no idea how large an N is needed for any epsilon, so we have no idea if the limit is a useful approximation to anything we will see if we read the series for a billion years.
Now often the proof of the limit contains a useful assertion towards the end of the proof. For example one might prove that |x_t - X| < A/t for some A. The next step is to note that the limit as t goes to infinity of x_t is X. This last step is a step in a very bad direction, going from something useful to a useless implication of the useful statement.
Knowing A we know that N = floor(A/epsilon). That's a result we can use. It isn't as elegant as saying something about limits (because it includes the messy A and often includes a formula much messier than A/t). However, unlike knowing the limit as t goes to infinity it might be useful some time in the next trillion years.
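A tiny worked example of the difference (the numbers are arbitrary): if |x_t - X| < A/t with A = 100 and we want to be within epsilon = 0.01 of X, the bound hands us an explicit, checkable N.

```python
A, epsilon = 100.0, 0.01
N = 10_000  # = A / epsilon
for t in (N + 1, 10 * N, 1000 * N):
    print(t, A / t < epsilon)  # True, True, True: every t > N is provably within epsilon of X
```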
In practice limits are used when it seems clear from a (finite) lot of calculations that they are good approximations. But that means one can just do the many but finite number of calculations and not bother with limits or infinity at all.
In this non-infinite series of posts, I will argue that the concept of infinity causes all sorts of fun puzzles, but is not actually needed to describe the universe in which we find ourselves.
Monday, February 19, 2024
Asymptotically We'll all be Dead
I assert that asymptotic theory and asymptotic approximations have nothing useful to contribute to the study of statistics. I therefore reject the vast bulk of mathematical statistics as absolutely worthless.
To stress the positive, I think useful work is done with numerical simulations -- Monte Carlos in which pseudo data are generated with pseudo random number generators and extremely specific assumptions about data generating processes, statistics are calculated, then the process is repeated at least 10,000 times and the pseudo experimental distribution is examined. A problem is that computers only understand simple precise instructions. This means that the Monte Carlo results hold only for very specific (clearly false) assumptions about the data generating process. The approach used to deal with this is to make a variety of extremely specific assumptions and consider the set of distributions of the statistic which result.
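A minimal sketch of one such Monte Carlo; the data generating process (iid standard normal), the sample size, and the statistic (the sample mean) are illustrative assumptions, not anyone's published design.

```python
import random
import statistics

random.seed(0)
REPS, N = 10_000, 50

stats = []
for _ in range(REPS):
    pseudo_data = [random.gauss(0.0, 1.0) for _ in range(N)]  # one pseudo data set
    stats.append(statistics.mean(pseudo_data))                # the statistic of interest

# Examine the pseudo-experimental distribution of the statistic.
print(statistics.mean(stats), statistics.stdev(stats))  # near 0 and near 1/sqrt(50), about 0.14
```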
I think this approach is useful and I think that mathematical statisticians all agree. They all do this. Often there is a long (often difficult) analysis of asymptotics, then the question of whether the results are at all relevant to the data sets which are actually used, then an answer to that question based on Monte Carlo simulations. This is the approach of the top experts on asymptotics (eg Hal White and PCB Phillips).
I see no point in the sections of asymptotic analysis which no one trusts, and I note that the simulations often show that the asymptotic approximations are not useful at all. I think they are there for show. Simulating is easy (many many people can program a computer to do a simulation). Asymptotic theory is hard. One shows one is smart by doing asymptotic theory which one does not trust and which is not trustworthy. This reminds me of economic theory (most of which I consider totally pointless).
OK, so now against asymptotics. I will attempt to explain what is done -- I will use the two simplest examples. In each case, there is an assumed data generating process and a sample of data of size N which one imagines is generated. Then a statistic is calculated (often this is a function of the sample size and an estimate of a parameter of a parametric class of possible data generating processes). The statistic is modified by a function of the sample size (N). The result is a series of random variables (or one could say a series of distributions of random variables). The function of the sample size N is chosen so that the series of random variables converges in distribution to a random variable (convergence in distribution is convergence of the cumulative distribution function at all points where the limiting distribution has no atoms, so that it is continuous there).
One set of examples (usually described differently) are laws of large numbers. A very simple law of large numbers assumes that the data generating process is a series of independent random numbers with identical distributions (iid). It is assumed (in the simplest case) that this distribution has a finite mean and a finite variance. The statistic is the sample average. As N goes to infinity it converges to a degenerate distribution with all weight on the population average. It is also true that the sample average converges to a distribution with mean equal to the population mean and variance going to zero - that is, for any positive epsilon there is an N1 so large that, if the sample size N > N1, the variance of the sample mean is less than epsilon (convergence in quadratic mean). Also, for any positive epsilon there is an N1 so large that if the sample size N > N1 then the probability that the sample mean is more than epsilon from the population mean is itself less than epsilon (convergence in probability). The problem is that there is no way to know what N1 is. In particular, it depends on the underlying distribution. The population variance can be estimated using the sample variance. This is a consistent estimate, so that the difference is less than epsilon if N > N2. The problem is that there is no way of knowing what N2 is.
Another very commonly used asymptotic approximation is the central limit theorem. Again I consider a very simple case of an iid random variable with a mean M and a finite variance V.
In that case (sample mean - M)N^0.5 will converge in distribution to a normal with mean zero and variance V. Again there is no way to know what the required N1 is. For some iid distributions (say binary 1 or 0 with probability 0.5 each, or uniform from 0 to 1) N1 is quite low and the distribution looks just like a normal distribution for a sample size around 30. For others the distribution is not approximately normal for a sample size of 1,000,000,000.
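A minimal sketch of that contrast; the uniform and the (heavily skewed) lognormal below are illustrative choices, and skewness is used only as a crude measure of how non-normal the sampling distribution of the mean still is at N = 30.

```python
import random
import statistics

random.seed(0)

def skewness(xs):
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

for draw, label in ((random.random, "uniform"),
                    (lambda: random.lognormvariate(0, 2), "lognormal")):
    means = [statistics.mean(draw() for _ in range(30)) for _ in range(5000)]
    print(label, round(skewness(means), 2))  # near 0 for uniform, far from 0 for lognormal
```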
I have criticisms of asymptotic analysis as such. The main one is that N has not gone to infinity. Also we are not immortal and may not live long enough to collect N1 observations.
Consider an even simpler problem of a series of numbers X_t (not stochastic, just deterministic numbers). Let's say we are interested in X_1000. What does knowing that the limit of X_t as t goes to infinity is 0 tell us about X_1000 ? Obviously nothing. I can take a series and replace X_1000 with any number at all without changing the limit as t goes to infinity.
Also, not only does the advice "use an asymptotic approximation" often lead one astray, it also doesn't actually lead one. The approach is to imagine a series of numbers such that X_1000 is the desired number and then look at the limit as t goes to infinity. The problem is that the same number X_1000 is the 1000th element of an, uh, large infinity of different series. One can make up a series such that the limit is 0 or 10 or pi or anything. The advice "think of the limit as t goes to infinity of an imaginary series with a limit that you just made up" is as valid an argument that X_1000 is approximately zero as it is that X_1000 is pi; that is, it is an obviously totally invalid argument.
This is a very simple example, however there is the exact same problem with actual published asymptotic approximations. The distribution of the statistic for the actual sample size is one element of a very large infinity of possible series of distributions. Equally valid asymptotic analysis can imply completely different assertions about the distribution of the statistic for the actual sample size. As they can't both be valid and they are equally valid, they both have zero validity.
An example. Consider an autoregressive process x_t = (rho)x_(t-1) + epsilon_t, where epsilon is an iid random variable with mean zero and finite variance. There is a standard result that if rho is less than 1 then (rhohat-rho)N^0.5 converges in distribution to a normal. There is a not so standard result that if rho = 1 (a random walk) then (rhohat-rho)N^0.5 goes to a degenerate distribution equal to zero with probability one, and (rhohat-rho)N goes to a strange distribution called a unit root distribution (with the expected value of (rhohat-rho)N less than 0).
Once I came late to a lecture on this and casually wrote "converges in distribution to a normal with mean zero and variance ..." before noticing that the usual answer was not correct in this case. The professor was the very brilliant Chris Cavanaugh, who was one of the first two people to prove the result described below (and was not the first to publish).
Doug Elmendorf, who wasn't even in the class and is very very smart (and later head of the CBO), asked how there can be such a discontinuity at rho = 1 when, for a sample of a thousand observations, there is almost no difference in the joint probability distribution of any possible statistic between rho = 1 and rho = 1 - 10^(-100). Prof Cavanaugh said that was his next topic.
The problem is misuse of asymptotics (or, according to me, use of asymptotics). Note that the question explicitly referred to a sample size of 1000, not a sample size going to infinity.
So if rho = 0.999999 = 1 - 1/(a million), then rho^1000 is about 1 but rho^10000000000 is about zero. Taking N to infinity implies that, for a rho very slightly less than one, almost all of the regression coefficients of X_t2 on X_t1 (with t1 < t2, both in the sample) are close to zero. But the distribution of rhohat for a sample of 1000 with rho = 0.999999 is also the 1000th element of other, completely different series of random variables.
One of them is a series where rho varies with the sample size N, so rho_N = 1 - 0.001/N.
For N = 1000, rho_N = 0.999999, so the distribution of rhohat for the sample of 1000 is just the same as before. However, the series of random variables (rhohat-rho)N^0.5 does not converge to a normal distribution -- it converges to a degenerate distribution which is 0 with probability 1.
In contrast, (rhohat-rho)N converges to a unit root distribution for this completely different series of random variables, which has the exact same distribution for the sample size of 1000.
There are two completely different equally valid asymptotic approximations.
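A minimal simulation sketch of the finite-sample point (the sample size, number of replications, and the OLS-without-intercept estimator are illustrative choices):

```python
import random
import statistics

random.seed(0)
N, REPS = 1000, 2000

def rhohat(rho):
    # Simulate x_t = rho * x_(t-1) + epsilon_t and return the OLS estimate of rho.
    x = [0.0]
    for _ in range(N):
        x.append(rho * x[-1] + random.gauss(0.0, 1.0))
    num = sum(x[t - 1] * x[t] for t in range(1, N + 1))
    den = sum(x[t - 1] ** 2 for t in range(1, N + 1))
    return num / den

for rho in (1.0, 0.999999):
    draws = [rhohat(rho) for _ in range(REPS)]
    print(rho, round(statistics.mean(draws), 5), round(statistics.stdev(draws), 5))
# The two rows are essentially identical: at N = 1000 the data cannot tell the two
# asymptotic stories apart, so they cannot both be useful guides to this sample size.
```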
So Cavanaugh decided which to trust by running simulations, and said his new asymptotic approximation worked well for large finite samples and the standard one was totally wrong.
See what happened ? Asymptotic theory did not answer the question at all. The conclusion (which is universally accepted) was based on simulations.
This is standard practice. I promise that mathematical statisticians will go on and on about asymptotics then check whether the approximation is valid using a simulation.
I see no reason not to cut out the asymptotic middle man.
Wednesday, October 25, 2023
Ex Vivo culturing of NK cells and infusion
Thursday, October 19, 2023
CAR T-Cell III
It would be best to induce central memory T-cells rather than effector memory T-cells, but I think memory phenotype of either type might do.
convertible CARs
This second post is a semi-crazy idea about making off the shelf CAR T-cells rather than modifying cells from the patient. The cost of the patient specific therapy is not prohibitive even now and should go down the learning curve. However, my proposal of multiple modifications would add to the cost and why do them again and again ?
So the idea is to make a CAR T-cell line which will not be rejected by the patient even though the CAR T-cells are made with someone else's T-cells with different surface antigens, especially different HLA antigens. Long ago my late father thought of deleting the Beta 2 microglobulin gene so that HLA A, B and C would not be expressed on the surface. Here I make a much more radical proposal (which will never be allowed, so it is just for a blog post).
The off the shelf CAR can be designed to express the do-not-kill-me signal PDL1. As I already proposed that the receptor PD1 be deleted, these cells will not tell each other not to kill. I think that these cells could be infused into anyone and would function. They would also be dangerous - if some became leukemic, dealing with them would have to include anti-PDL1. Recall that I propose inserting Herpes TK into the super CAR T-cells so that they can be killed, if necessary, with ganciclovir. That would be even more clearly needed with the PDL1 expressing super CAR T-cells.
Wednesday, October 18, 2023
Hot Rod CARs
This approach has been very successful in treating leukemia, but not so successful in treating solid tumors -- the tumor microenvironment is not hospitable to killer T-cells. There are a large number of known aspects of the tumor microenvironment which tend to protect tumors from activated killer T-cells.
1) Perhaps the most important is myeloid derived suppressor cells -- these are immature granulocytes and macrophages which are attracted to the tumor. Among other things, they produce anti-inflammatory IL-10, and also produce the free radical nitric oxide (NO).
2) Tumor infiltrating T-regs which produce and display anti-inflammatory TGF beta.
3) Cancer cells display checkpoint "don't kill me signals" including PDL1 and CTLA4 ligand.
4) There are generally low oxygen, low glucose, low pH, and high lactic acid levels.
Many of the issues involve specific interactions with specific receptors on the T-cells (eg PD1, CTLA4, the IL-10 receptor, the TGF beta receptor). I think that, since one is already genetically modifying the T-cells, one can also delete those receptors so they do not respond to the anti-inflammatory signals. The NO issue is different -- it is a non-specific oxidizing agent. I think here one can make cells which always produce the antioxidant response by deleting KEAP1, which inactivates NRF2, which triggers the antioxidant response.
So I think it is possible to produce souped up CARs which invade solid tumors.
There is a potential risk of putting killer T-cells which can't be regulated into a patient, so I would also insert the gene for herpes TK so they can be specifically killed by ganciclovir.
This approach makes sense to me. It involves a whole lot of work aiming at a possible future approval of a clinical trial. I can see why it hasn't been done (and will have another post about reducing the cost and effort involved) but I think it makes sense to try.
Monday, October 02, 2023
MMLF Founding Manifesto
The liberation of such mosquitoes is one way to fight malaria. They (and similarly modified members of other species of anopheles mosquitoes) can eliminate malaria.
However they can't do that imprisoned in lab cages. They are not released because of who ? WHO. It is agreed that the important and allegedly for some reason risky decision must be made after careful thorough consideration and that release occur only when all affected countries (which are numerous as mosquitoes don't respect international boundaries) agree.
That is probably roughly never, and certainly not until there have been millions more unnecessary deaths.
I think the modified mosquitoes should be liberated using any means necessary.
Saturday, March 19, 2022
Elisabeth, Essex, and Liberty Valence in Lammermoor
Thursday, March 04, 2021
Dr Seuss & Brain Washing
Sunday, February 28, 2021
Who to be mad at
by hand
Friday, December 18, 2020
So far the efficacy data have been presented. As reported in the press earlier, the vaccine is roughly 95% effective, that is, roughly 95% of people who got Covid 19 during the trial were participants who received the placebo.
Importantly, the null hypothesis that just one dose is just as good as two was not rejected. The test of this null had extremely low power as almost all participants received both doses, so basically this means cases less than 4 weeks after the first dose. However, note the extreme rigidity of the FDA.
Before allowing vaccination, the FDA required proof of efficacy. Before allowing a modification from two doses 4 weeks apart to one dose, the FDA requires … I don’t know maybe if Jesus Christ returned and petitioned them for some flexibility, they would give Him a hearing, but I guess they would tell him he needed to propose (and fund) a new Phase III trial.
It is also true that there is no evidence of benefit from the second dose of Pfizer’s vaccine. It is clear that people who have received one dose of either vaccine are among those least at risk of Covid 19.
The vaccines are in very short supply. People are anxiously waiting for vaccination. Because the protocol had two doses, half of the vaccine will be reserved for the people who will benefit least.
Here there is a difference between careful science and optimal policy. In science it is crucial to write the protocol first then follow it mechanically. This is necessary so that the experimental interventions are exogenous and one can be sure they cause the observed outcomes and are not caused by observations.
However, it is not optimal policy to reduce the possible decisions to two, a priori, with extremely limited data. This is what the FDA does. I think they should approve a single dose. Their rule is always to only act on extremely firm knowledge. It is, in this case, not going to be first do no harm. The second dose has side effects (mild but not zero). There is, I think, no evidence of benefits. (Again, the test has extremely low power (and I’m not sure the protocol did not say the question would be addressed — if it didn’t then there is a problem — the rule decide what to do in advance applies to data analysis too — it is vital that the data not be dredged looking for a significant coefficient)). I think the point estimate is pretty much exactly zero benefit.
I think that people should be given a single dose. After everyone who wants one dose has been vaccinated, then it makes sense to give people a second dose. There is no reason to think spacing 4 weeks apart is optimal — the spacing was decided in advance.
The next speaker discussed safety. There is 0 evidence that vaccination increases the risk of anaphylactic shock. There were two cases: one person who suffered anaphylaxis received the placebo and one received the vaccine. The most common side effect was pain. There were no cases of severe side effects. People with a history of anaphylaxis were *not* excluded from the study.
Now a third speaker argues for unblinding the study and giving the vaccine to participants who were given the placebo. They can drop out and just get the vaccine when it is their turn. Losing the control group is not ideal but attrition will make it useless soon anyway (people will not settle for 50% chance they were vaccinated when the vaccine is approved — probably tomorrow). I agree, they have enough data and it is not ethical to leave people unvaccinated just as a control group.
Now they open for discussion with a few members of the public allowed to ask questions (the law requires this). I muted. Now they have taken a pause.
My question is why not give people just one dose until everyone who wants it has been vaccinated once ? I see no basis at all for allocating the scarce vaccine to a second dose. The scientific method does not say that optimal policy requires sticking to a protocol written before data were collected. The first do no harm principle (which I absolutely oppose in general) would imply giving one dose until there is evidence of benefit of a second dose.
Consider the case of tests for Covid 19. The test kits sent out by the CDC contained powder in tubes. One tube was the positive control — it was supposed to contain DNA with sequences corresponding to the Sars Cov2 RNA genome sequences. One of the tubes which were supposed to contain one of the 3 oligonucleotides to be used was contaminated with traces of that DNA. The result was that the kit as shipped reported that distilled water was infected with Sars Cov2. The hospital labs which got the kits almost immediately figured out that they could test with valid results if they didn’t use the material in the contaminated tube, and just used 2 oligonucleotides. They could determine who had Covid 19 using the kit. But that was a modified protocol which was not FDA approved, so the FDA did not allow them to do this. The FDA also did not approve dozens of tests which were developed by the private sector.
Here the FDA's decision that they would rather be safe than sorry kept the US blind to Covid for … I think maybe a couple of weeks. "Don't look, because you haven't proven that your glasses have exactly the right prescription" is not good advice to someone on a highway. This was a very bad problem. I think the lesson learned is not that even the CDC lab sometimes makes mistakes. It is that rigidity and refusing permission is not the way to safety.
Since then, I have been very favorably impressed by the FDA's efforts. But today I want more — I mean less — I mean approving less and allowing more flexibility. I see no case for insisting on giving people second doses with almost exactly zero evidence of efficacy. I see no case for reserving vaccine for the people who are least at risk of Covid 19. Yet I see no chance that a single dosage will be allowed.
Usual rant
In previous posts, I object to the confusion of the Pure Food and Drug Act with the scientific method. I note that it is simply a mistake to assert that the null hypothesis is to be treated as true until it is rejected by the data. The law says drugs are assumed ineffective until they are proved effective. That is US law, not the scientific method. In general the decision of which of 2 hypotheses to treat as the null is arbitrary and should have no implications. I am not a scientist, but I am familiar with the Neyman Pearson framework and I consider my claims about the meaning of “null hypothesis” to be as solid as my assessment of 2+2. Both are simple math.
Friday, October 02, 2020
Constitutional Nit Picking
In fact, we can blame the delegates at the Constitutional Convention (as well as the 7th Congress) for that particular offence against democracy. Back in 1800, the Constitution, Article II Section 1, included "But in chusing the President, the Votes shall be taken by States, the Representation from each State having one Vote;"
The one state one vote rule does appear in the 12th Amendment, but it was already in the original Constitution.
A more important point is that this is only relevant if there is a 269-269 tie in the electoral college. The 12th amendment also says " The person having the greatest number of votes for President, shall be the President, if such number be a majority of the whole number of Electors appointed;" Notice "Electors appointed" not "More than the number of states plus half the number of representatives" or currently more than 269.
It is (still in spite of everything) inconceivable that the race be called before it was agreed who won the tipping point state, but if it is decided that a President elect must be declared while the winner of some state is contested, the matter will not go to the House voting one state one vote (as always results must be certified by the House voting the normal way one representative one vote).
It has not always been true that all states are represented in the electoral college. It hasn't always been true in my lifetime (I was born on November 9 1960, the day after electors were elected on November 8 1960, but before those electors elected Kennedy). In 1960 the electors for Hawaii were never assigned because the outcome was contested when the electors voted. This means that Hawaii had to wait until 1964 to be represented in the electoral college after becoming a state on August 21, 1959.
Tuesday, May 26, 2020
Hobbes and Hegel
Hence the question, what do Hobbes and Hegel have in common ? I admit I know a bit about Hobbes having read the first two books of Leviathan (and I bet Hobbes's mom was too bored to read the third and fourth). About Hegel I know almost exactly nothing (and more than I would like).
They are two seminal, influential writers. The vibrant discussion and debate about the social contract began with Hobbes, largely transmitted through Locke's attempt to refute Hobbes in his second treatise on government (I have read it but not his first treatise on government).
In each case, most people addicted to the big H's do not share their conclusions or general orientation. Some phrases and words live on (social contract, dialectic, historical age) while the original main point is utterly rejected.
The interesting thing is that these two genuinely revolutionary writers were reactionary. Both advocated absolute monarchy. Hobbes explicitly rejected not only the British revolution, but also the ancien regime with power divided between the King and Parliament. He regretted the defeats at Naseby and Runnymede, he contested both Cromwell and Polybius. He slashed at the division of power as sharply as Ockham.
Hegel was not so clear (the military situation has developed not necessarily to Japan's advantage). He claimed to believe that Prussia should have a constitution, and that Prussia had a constitution.
I just had an idea. I think the extraordinarily original thoughts are the result of attempting to defend the indefensible. There were a few obsolete arguments for absolute monarchy. The first was might makes right, which was challenged by facts on the ground. The second was the divine right of kings, which was hampered by the contrast between God's stubborn silence and theologians' verbosity. Something new was needed, and first the social contract then the dialectic were new. I think the radicalism of Hobbes was made necessary by his extremely reactionary factionalism. I think the extreme abstraction and vagueness of Hegel [should be discussed only by people who have actually read Hegel] was a new obscurantism needed because people had ceased to look to scripture for guidance on public policy (people starting with Hobbes).
Necessity is the mother of invention, and the painful and humiliating need to find some way to defend the pretenses of a royal patron was the mother of genius.