This video explains how to calculate the free energy of a system and uses the log likelihood ratio, resolution maps, correlation functions, and partition functions to prove Main Formula Two of singular learning theory, with Main Formula One as an ingredient. It also explains how a Mellin transform and its inverse are used to calculate the density of states, and how a Taylor expansion and boundary defining functions can be used to calculate the zeta function. This video is a great resource for anyone looking to learn the fundamentals of free energy in the singular learning setting.

A log likelihood ratio, denoted f, is used to define an empirical version of the K function and the free energy. A resolution map re-parameterizes the set where K is less than or equal to epsilon via a real analytic manifold M, on which the free energy is calculated. The correlation function is given by an expectation over the a(x, u) functions, and a variable ξ_n is defined as the normalized difference between K_n and its mean. To prove Main Formula One, the parameter space needs to be a compact subset of R^d and f must extend to a holomorphic function on a larger set.
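As a concrete illustration (my own toy example, not from the talk): for a true density N(0, 1) and model N(w, 1), the log likelihood ratio is f(x, w) = log q(x) − log p(x|w) = w²/2 − wx, so K(w) = E[f(X, w)] = w²/2 and K_n is its empirical average over the data set. A minimal sketch:

```python
import math
import random

def f(x, w):
    # log likelihood ratio between truth N(0,1) and model N(w,1):
    # log q(x) - log p(x|w) = ((x - w)**2 - x**2) / 2 = w**2/2 - w*x
    return w * w / 2.0 - w * x

def K(w):
    # population K(w) = E_X[f(X, w)] = w**2 / 2 for this toy model
    return w * w / 2.0

def K_n(xs, w):
    # empirical version of K: the average of f over the data set
    return sum(f(x, w) for x in xs) / len(xs)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(K_n(xs, 0.5), K(0.5))  # K_n(0.5) is close to K(0.5) = 0.125
```

The fluctuation K_n(w) − K(w) = −w·(sample mean) is of order 1/√n here, which is exactly the √n-normalized ξ_n process the talk studies.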

Partition functions Z_n(c, v) are defined in terms of two functions, c and v, with the coordinates split into essential ones satisfying the λ = (h_i + 1)/(2k_i) condition and non-essential ones for which that quantity is larger than λ. Singular integrals are bounded by a leading term of (log n)^{m−1}/n^λ and represented as a sum over local charts. In each chart the evidence integrand equals e^{−nβ x^{2k} y^{2k′} + β√n x^k y^{k′} ξ_n(x, y)}.
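In normal crossing coordinates, where K ∘ g = u^{2k} and the prior times the Jacobian contributes u^h, the learning coefficient can be read off as λ = min_i (h_i + 1)/(2k_i), with multiplicity m the number of coordinates attaining the minimum; those are the essential coordinates. A small helper of my own (the function name and interface are illustrative, not from the talk):

```python
from fractions import Fraction

def rlct(k, h):
    """Given multi-indices k (from K∘g = u^(2k)) and h (from the prior/Jacobian),
    return (lam, m, essential) where lam = min_i (h_i+1)/(2*k_i), m is the
    number of coordinates attaining the minimum, and essential lists them.
    Coordinates with k_i = 0 do not constrain lam."""
    ratios = {i: Fraction(h[i] + 1, 2 * k[i]) for i in range(len(k)) if k[i] > 0}
    lam = min(ratios.values())
    essential = [i for i, r in ratios.items() if r == lam]
    return lam, len(essential), essential

# e.g. K∘g = u1^2 u2^4 with prior exponents h = (0, 1):
print(rlct((1, 2), (0, 1)))  # both ratios equal 1/2, so lambda = 1/2, m = 2
```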

This theorem concerns the singular integral for a smooth function c and a positive function v. The expression Y_n(c, v) is a complicated integral capturing the constants and leading order behavior. Its difference from the partition function is bounded above, with a constant C greater than zero, by a term of order (log n)^{m−2}, one order of logs smaller. The proof is split into two parts, one of which uses the triangle inequality to zero in on the zero set and pull out the leading order term. This term equals the partition function restricted near the zero set, and the defining property of the Dirac delta distribution is used to introduce the density of states integral.

A theorem using a Mellin transform and its inverse is used to calculate the density of states, giving an expression that includes a Gamma function, an n^{−λ} factor, and an integral over t from 0 to n b^{2|k|} y^{2k′}. The same bound as before is derived by changing variables, which introduces the b^{2|k|} factor and a (log n)^{m−1} term. The integral is split into two, and the second integral is bounded by constants. A trick is used to reduce the expression to an integral of a Heaviside step function against a Dirac distribution, and a theorem is used to integrate out the essential coordinates, giving the integral in terms of a constant times (log(n b^{2|k|} y^{2k′}/t))^{m−1} and hence a bound on the function v.

The integral of such a function can be bounded by a logarithmic function under certain assumptions, which can be used to resolve the function, its analytic part, and the boundary defining functions simultaneously for an epsilon greater than zero. By compactness, an analytic manifold can be decomposed into a finite set of local charts of the form [0, b)^d. This is possible if the manifold is semi-analytic, and the boundary of the manifold can be thought of as being chopped off. Resolving the prior means that the boundary fits into this description neatly, and the proof does not really care about it.

In Theorem 6.7, the convergence in law of the random variables is proven by dividing the integral into two parts: one close to the zero set and one away from it. By definition of the ξ_n random variable, the inequality K(w) > ε is traded for a penalty term, and the negative term is bounded by a square. This allows the proof to be completed by factoring out the leading order behavior and collecting all the results together. Theorem 4.9 states that after dividing out the leading term, the constants derived before can be put into the equation, multiplied by the prior v plus a gradient-of-K term raised to a power s times the characteristic function of W, extending Atiyah's division of distributions. However, there is a gap in the proof, as the boundary defining function could contribute to the K due to other multi-indices.

The zeta function is a continuous function between function spaces with respect to a supremum norm, which converges to a random variable. Theorem 6.6 states that it can be analytically continued to a meromorphic function on all of C. This is done via the integral ζ(z) = ∫_W K(w)^z φ(w) dw, where W is a compact set. A Taylor expansion is used to isolate the constant order term, which is necessary for the proof of the free energy formula. The boundary defining function and the prior need to be semi-analytic, and a variant of the naive integral sign over W is what is actually always meant.

A log likelihood ratio, denoted f, is used to define an empirical version of the K function. The normalized evidence, or partition function, is then calculated by integrating e^{−n K_n(w)} against the prior over the parameter space. The goal is to prove an asymptotic expansion of the free energy, which states that it equals λ log n minus an (m − 1) log log n term plus a random variable. This is done by introducing a resolution map g, which re-parameterizes the set where K is less than or equal to epsilon via a real analytic manifold M, on which the free energy is then calculated.
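To make the expansion concrete (a toy example of mine, not from the talk): for K(w) = w⁴ on [−1, 1] with uniform prior, λ = 1/4 and m = 1, so F_n ≈ (1/4) log n + O(1) with no log log n term. A numerical sketch, replacing K_n by its mean K so that only the deterministic part is visible:

```python
import math

def free_energy(n, num=200_001):
    # F_n = -log ∫_{-1}^{1} exp(-n * w^4) * (1/2) dw with uniform prior 1/2,
    # computed by the trapezoid rule
    h = 2.0 / (num - 1)
    total = 0.0
    for i in range(num):
        w = -1.0 + i * h
        weight = 0.5 if i in (0, num - 1) else 1.0
        total += weight * math.exp(-n * w ** 4) * 0.5
    return -math.log(total * h)

lam = 0.25  # lambda = 1/4 for K(w) = w^4 in one dimension
f1, f2 = free_energy(10_000), free_energy(1_000_000)
# the increment should be lam * log(n2/n1) = 0.25 * log(100) ≈ 1.151
print(f2 - f1, lam * math.log(100.0))
```

The constant-order remainder here is −log Γ(5/4) plus the prior normalization; in the singular multi-chart case an extra −(m − 1) log log n term appears, as the talk derives.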

In each chart, a(x, u) is a real analytic function of u and K(g(u)) is normal crossing. To factorize the function, a variable ξ_n is defined as the normalized difference between K_n and its mean. This sequence of random variables converges to a Gaussian process on M and has mean zero by construction. Its correlation function is given by an expectation over the a(x, u) functions. To prove Main Formula One, a few conditions are needed, such as the parameter space being a compact subset of R^d and f extending to a holomorphic function on a larger set.

Partition functions Z_n(c, v) are defined in terms of two functions, c and v, with v greater than zero. Essential coordinates are indexed by i and defined by a constant λ equal to (h_i + 1)/(2k_i); non-essential coordinates are indexed by j and have this quantity larger than λ. Resolution of singularities is applied to a real analytic set, which is contained in an open set called W^C. A series of bounds on singular integrals is then used as an ingredient for the proof.

In order to evaluate the partition function, the singular integral is bounded by a leading term of (log n)^{m−1}/n^λ. The coordinates can be rearranged into essential and non-essential components, with the essential ones satisfying the λ = (h_i + 1)/(2k_i) condition. The quantity to be evaluated is then represented as a sum over local charts, with each chart composed with the resolution and multiplied by the Jacobian of the chart. In each chart the integrand equals e^{−nβ x^{2k} y^{2k′} + β√n x^k y^{k′} ξ_n(x, y)}.
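As a sanity check of the (log n)^{m−1}/n^λ scaling (my own toy example, not from the talk): for K(x, y) = x²y² on [0, 1]², both coordinates are essential with λ = 1/2 and m = 2, so Z_n = ∫∫ e^{−n x²y²} dx dy should behave like C (log n)/√n. Integrating out x exactly with the error function:

```python
import math

def Z(n):
    # Z(n) = ∫_0^1 ∫_0^1 exp(-n x^2 y^2) dx dy.  The inner x-integral is
    # (sqrt(pi)/2) erf(sqrt(n) y) / (sqrt(n) y), so with u = sqrt(n) y,
    # Z(n) = (sqrt(pi) / (2 sqrt(n))) * ∫_0^{sqrt(n)} erf(u)/u du.
    a = math.sqrt(n)
    cut = min(a, 8.0)  # erf(u) = 1 to machine precision beyond u = 8
    num = 80_001
    h = cut / (num - 1)
    total = 0.0
    for i in range(num):
        u = i * h
        g = 2.0 / math.sqrt(math.pi) if u == 0.0 else math.erf(u) / u
        total += g * (0.5 if i in (0, num - 1) else 1.0)
    integral = total * h + (math.log(a / cut) if a > cut else 0.0)
    return math.sqrt(math.pi) / (2.0 * a) * integral

for n in (1e4, 1e8):
    c = Z(n) * math.sqrt(n) / math.log(n)
    print(n, c)  # c is nearly constant: Z_n scales like log(n)/sqrt(n)
```

The slow drift in c comes from the lower-order (log n)^0/√n term, which is exactly the "one order of logs smaller" remainder the talk's Theorem 4.9 controls.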

A finite number of charts can be used to cover a compact manifold, so the target is to show that the singular integral equals a finite sum. If this is true, the essential charts carry the leading order behavior, and a constant independent of the chart can be factored out, leaving terms of the form (log n)^{m_α − m}/n^{λ_α − λ} · R_{α,n}. For these to decay, either λ must be smaller than λ_α for all α, or, where they are equal, m must be bigger than m_α. The negative log of the whole expression is then taken.

In this theorem, a smooth function c and a positive function v are given. The expression Y_n(c, v) is a complicated integral involving constants and leading order behavior. The functions c₀(y) = c(0, y) and v₀(y) = v(0, y) have the essential coordinates zeroed out, which amounts to looking near the zero set. The theorem isolates the essential part of the singular integral for a particular partition function.

The difference between the partition function and Y_n is bounded above, with a constant C greater than zero, by a term of order (log n)^{m−2}, one order of logs smaller. The proof is split into two parts, with one part using the triangle inequality to zero in on the zero set and pull out the leading order term. This term equals the partition function with c₀ and v₀ substituted, and the defining property of the Dirac delta distribution is used to introduce the density of states integral.

In order to calculate the density of states, a theorem from before is invoked which uses a Mellin transform and its inverse to evaluate the integral. This gives an expression which includes a Gamma function, an n^{−λ} factor, and an integral over t from 0 to n b^{2|k|} y^{2k′}, where |k′| denotes the sum of the entries of the multi-index k′. A sleight of hand is then used to derive the same bound as last time by changing the variables x to x/b and y to y/b, which modifies the constant out front and introduces a b^{2|k|} term.
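The density of states that the Dirac delta introduces can be seen explicitly in a toy case (my example, not the talk's): for K(x, y) = x²y² on [0, 1]², the volume of {K ≤ t} is V(t) = √t (1 − ½ log t), so v(t) = V′(t) = ¼ t^{−1/2} log(1/t). This exhibits the t^{λ−1} (log 1/t)^{m−1} shape with λ = 1/2, m = 2 that produces the Γ(λ) and (log n)^{m−1} factors under the Laplace transform. A numeric check of V(t):

```python
import math

def volume_numeric(t, num=100_000):
    # area of {(x, y) in [0,1]^2 : x^2 y^2 <= t}, i.e. x*y <= s with s = sqrt(t),
    # as a midpoint Riemann sum of ∫_0^1 min(1, s/x) dx
    s = math.sqrt(t)
    return sum(min(1.0, s / ((i + 0.5) / num)) for i in range(num)) / num

def volume_exact(t):
    # closed form: V(t) = sqrt(t) * (1 - log(sqrt(t))) for 0 < t <= 1
    s = math.sqrt(t)
    return s * (1.0 - math.log(s))

t = 0.01
print(volume_numeric(t), volume_exact(t))  # both ≈ 0.3303
```

Differentiating the closed form gives the (log 1/t) factor in v(t); with m essential coordinates the same computation yields (log 1/t)^{m−1}.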

(log n)^{m−1} is the highest order log n term in the binomial expansion. The remainder is collected into a sum over indices j from 1 to m − 1 of (m − 1 choose j) times the same constant, except with the log n exponent decreased by j. The t integral is then split into two: one from 0 to infinity, which has the asymptotic behavior desired, and one from the upper limit to infinity, which is bounded by constants.

In order to find a bound on the function v₀, the speaker uses a trick to reduce the expression to an integral of a Heaviside step function against a Dirac distribution. The speaker then uses a theorem to integrate out the essential coordinates, and finds an expression for the integral in terms of a constant times (log(n b^{2|k|} y^{2k′}/t))^{m−1}. This yields a bound on v itself.

Under certain assumptions, the integral of such a function can be bounded by a logarithmic function. This result can be used to resolve a function, its analytic part, and the boundary defining functions simultaneously for an epsilon greater than zero. This is done using resolution of singularities and a partition of unity.

By compactness, an analytic manifold can be decomposed into a finite set of local charts of the form [0, b)^d. This is justified by the fact that every point in the manifold has a neighbourhood that can be decomposed into a union of such [0, b)^d charts. Whether this requires the manifold to be semi-analytic is left as an open question here. The boundary of the manifold can be thought of as being chopped off: being at the boundary just means that instead of having another side to one of the coordinates, it ends at something like a line. Resolving the prior means that the boundary fits into this description neatly, and the proof does not really care about it.

In Theorem 6.7, the convergence in law of the random variables is proven by dividing the integral into two parts: one close to the zero set and one away from it. As n goes to infinity, the part away from the zero set is irrelevant. By definition of the ξ_n random variable, the inequality K(w) > ε is traded for a penalty term, and the negative term is bounded by a square. This allows the proof to be completed by factoring out the leading order behavior and collecting all the results together.
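That the part away from the zero set is negligible can be checked numerically in the w⁴ toy model (mine, not the talk's): the contribution of {K > ε} to the evidence is at most e^{−nε} times the prior mass, which is exponentially small next to the n^{−λ} head:

```python
import math

def trap(f, lo, hi, num=200_001):
    # simple trapezoid rule on [lo, hi]
    h = (hi - lo) / (num - 1)
    total = 0.5 * (f(lo) + f(hi))
    total += sum(f(lo + i * h) for i in range(1, num - 1))
    return total * h

n, eps = 1000.0, 0.1
integrand = lambda w: math.exp(-n * w ** 4)  # e^{-n K(w)} with K(w) = w^4

cut = eps ** 0.25                    # K(w) <= eps iff |w| <= eps^(1/4)
local = trap(integrand, -cut, cut)   # the part close to the zero set
tail = 2.0 * trap(integrand, cut, 1.0)   # the K > eps part
print(local, tail, 2.0 * math.exp(-n * eps))  # tail sits below the e^{-n eps} bound
```

Here local is of order n^{−1/4} ≈ 0.18 while the tail is below 2e^{−100}, so the localization to W_ε costs nothing asymptotically.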

Summarising, the integral expression equals the sum of the essential parts of the singular integral over the local charts. Semi-analyticity is needed to handle the boundary, since essential coordinate charts can intersect the boundary. The remainder term converges in probability to zero, while the remaining terms converge to a random variable. Furthermore, the convergence is uniform.

Theorem 4.9 states that after dividing out the leading term, there is still one order of logs remaining. This can be used in the essential charts, and the constants derived before can be put into the equation, multiplied by the prior v plus a gradient-of-K term raised to a power s times the characteristic function of W. This extends the proof of Atiyah's division of distributions, which assumes semi-analyticity of W. However, there is a gap in the proof, as the boundary defining function could contribute to the K due to other multi-indices.

The zeta function is a continuous function between function spaces with respect to a supremum norm, which converges to a random variable. The σ factors of the partition of unity can be handled by taking the sum over a finite set of charts, which converges to a Gaussian process. Theorem 6.6 states that the zeta function can be analytically continued to a meromorphic function on all of C. This is done via the integral ζ(z) = ∫_W K(w)^z φ(w) dw, where W is a compact set.
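A minimal instance of the meromorphic continuation (my example, not from the talk): with K(w) = w² on [0, 1] and uniform prior, ζ(z) = ∫₀¹ (w²)^z dw = 1/(2z + 1) for Re z > −1/2, and the right-hand side continues ζ to all of C with a single pole at z = −1/2 = −λ. Numerically:

```python
import math

def zeta_numeric(z, num=500_000):
    # ζ(z) = ∫_0^1 (w^2)^z dw via a midpoint Riemann sum (real z > -1/2,
    # where the integral converges)
    return sum(((i + 0.5) / num) ** (2 * z) for i in range(num)) / num

def zeta_continued(z):
    # meromorphic continuation: 1 / (2z + 1), with a pole at z = -1/2 = -lambda
    return 1.0 / (2.0 * z + 1.0)

for z in (1.0, 0.3, -0.2):
    print(z, zeta_numeric(z), zeta_continued(z))  # agree where the integral converges
```

Reading off the largest pole of the continuation (and its order) is exactly how λ and m enter the free energy expansion.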

The speaker is discussing a theorem about meromorphic functions, which can be Taylor expanded so that the constant order term can be isolated. This is necessary for the proof of the free energy formula, but it is unclear why the characteristic function should affect the output. It is also unclear why the particular analytic functions used to define W should matter, as any set of analytic functions should cut out the same set if they are asked to be non-negative. There is a remark which justifies why the boundary defining function and the prior need to be semi-analytic, but it is not understood. It is suggested that the naive integral sign over W is too simple, and that a variant of it is what is actually always meant.

All right — okay, cool. So I'll go through the proof of Main Formula Two. Given a true density, a model parametrized by w, and a prior on the parameter space, we have the following definitions. Throughout this talk, f will be the log likelihood ratio — the log of the ratio between the truth and the model — and K_n(w) is the empirical version of the K function, the expected log likelihood ratio, that we'll mainly be studying; it's just the empirical average of f over the data set. Our normalized evidence, or normalized partition function, or normalized marginal likelihood — there are many names for this thing — is the main quantity we're interested in. We've introduced it before: it's the integral of e^{-n K_n(w)} against the prior over the parameter space. And our free energy, in this case the normalized free energy, is the negative log of the normalized evidence. Our goal is to prove the following asymptotic expansion, which says that the free energy equals lambda log n, minus an (m - 1) log log n term, plus a random variable, where this sequence of random variables converges in distribution to a random variable. So this is saying that this term here is order one, but order one in the sense of random variables: a sequence of random variables is O_p(f_n) for some function f of n if and only if, when you divide it by f_n, it's bounded above — in ordinary asymptotics bounded above by a constant, but here bounded above by a random variable. All right, so that's our goal, and we're going to state it a lot
more carefully in a bit. Previously we proved Main Formula One, which will be the last ingredient needed to prove Main Formula Two. Main Formula One says that there exists a resolution map g — throughout, g is going to be the resolution map — such that we re-parametrize W_epsilon, the set where K is less than or equal to epsilon for some epsilon, with a real analytic manifold M, such that in every local chart U of M

we can factorize the function f as shown: a(x, u) is a real analytic function of u, and K(g(u)) is normal crossing. And we can define this variable, which is the difference between K_n and its mean; if you normalize it properly, then outside the zero set of K it converges to a normal distribution, but it is not well defined near the zero set of K, which is why we need to resolve it. Once we resolve it using g, the quantity is defined on all of M. We have proven that this process has mean zero by construction, and that its correlation function is given by the expectation over X of the a(x, u) functions we defined above. We haven't quite proven the convergence — in fact there's a lot about convergence of random variables that we haven't proven, and that requires some background in weak convergence of measures — so we're just going to take it as fact that this sequence of random variables converges to a Gaussian process on M. This is like the function-space version of the central limit theorem. Okay, let's move over to the second board, just to recall that to prove Main Formula One we needed a few conditions, called Fundamental Conditions One in the gray book. I'm going to review them because, since we use Main Formula One in the proof of Main Formula Two, these conditions are assumed here as well, and they're going to be upgraded in a moment. In Fundamental Conditions One we assume that our parameter space is a compact subset of R^d. In this schematic picture, R^d is represented by a line and W is a compact subset of that line. I drew W as if it has non-empty interior; I'm not sure that's needed in Main Formula One, but it is certainly needed in Main Formula Two. But the point is somewhat moot
when we look at the other conditions. So W_0 is our zero set of K, and the second condition says that the f function extends to a holomorphic function on a larger set. A holomorphic function is defined on an open domain and is complex differentiable, which means we're talking about a complex domain at the moment — that's a little bit unusual. It's saying that the function f can be extended, meaning that in

the power series expansion of f you can put in complex variables and it still makes sense, so it extends to C^d. We call that set W^C, and it's an open set. I should say that there is also a corresponding W^R that contains the real analytic set, and W^C contains W^R. The same picture happens when we localize near the zero set: there is a W_epsilon contained in a W^R_epsilon, which is itself contained in a W^C_epsilon. Okay, so that's the condition on f. We also needed K(w), the expected log likelihood ratio, to be real analytic, because we are going to apply resolution of singularities to this function; and its zero set W_0 is assumed to be non-empty — that's the realizability assumption. I think in the future we're going to discuss how to extend this to the unrealizable case. And there's a technical condition which is needed for the convergence of the xi_n variable — needed to prove that the variance of xi_n on the zero set is finite. Okay, so that's what we needed to prove Main Formula One, and Main Formula One is going to be an ingredient for our proof today. The other ingredient is a series of bounds on something we have been looking at, namely a series of singular integrals, and I'm going to remind us what those are. So, singular integrals — we have bounds for them that we proved previously. We investigated something called a partition function, Z_n(c, v), where c and v are both functions and v is greater than zero. Recall that we partition our set of coordinates into two parts: the x part, which we will call the essential coordinates, and the y part,
the non-essential ones. Essential means — looking at the exponents in x^{2k} y^{2k'} — that there is a lambda equal to (h_i + 1)/(2 k_i), and that equality holds with the same constant lambda for all i from 1 to m, for all the essential coordinates. The non-essential coordinates have this quantity larger than lambda; let me index those with j — they carry the primed exponents and correspond to the y's,

for j equal to 1 to s, where m + s equals our dimension d. And we had a bound for this which says that its leading order is some constant times (log n)^{m-1} / n^lambda — by the way, throughout, this norm is the supremum norm over the domain of the function — together with a matching lower bound involving the minimum of v, so we have a sandwich inequality for this singular integral. I'm running out of board space, but the upper bound is some other constant with the same leading behavior. Okay, so let's talk about how we plan to use this pair of ingredients; let's move to the next board. The plan is to use Main Formula One, so that in every local chart that intersects the zero set — in Main Formula One we have the coordinates u — we can rearrange the coordinates into essential and non-essential, where the essential ones satisfy the lambda = (h_i + 1)/(2 k_i) condition. In those local charts, K composed with the resolution gives us x^{2k} y^{2k'}, and the prior composed with the chart, multiplied by the Jacobian of the chart, gives v(x, y) x^h y^{h'}, where this v function is bounded away from zero and smooth. And the quantity we need in order to evaluate our free energy is K_n: when that is composed with g we get x^{2k} y^{2k'} - (1/sqrt(n)) x^k y^{k'} xi_n(x, y). So Z_n — the thing we want to evaluate, our normalized evidence — can now be represented, using a partition of unity, as a sum over local charts
M_alpha of M of an integral of e^{-n beta K_n(g(x, y))}; substituting, that looks like e^{-n beta x^{2k} y^{2k'} + beta sqrt(n) x^k y^{k'} xi_n(x, y)}, and then we have

the prior, which after composing with the resolution, together with the Jacobian, gives us the v(x, y) x^h y^{h'} factor. So that's the integral we are going to investigate. Another crucial point is that we can cover our manifold with a finite number of charts because of compactness, so this is a finite sum: alpha is a finite index. Sorry — I forgot our partition of unity: there is a sigma_alpha(x, y) in here, which is non-negative and smooth. Okay, so we want to use our bounds on integrals like that. Our target is to see whether or not this is equal to the same finite sum over the singular integrals from before, where we pattern-match c with xi_n against v, and then we seek bounds on that. If we can justify that — let me go to the next board — we can continue as follows, though this is going to be a sloppy proof to illustrate the idea. The idea is that, given that representation in every local coordinate, we can separate out the integrals that achieve the highest order behavior — we call those the essential charts — and record the bound on the singular integral: they equal something of the form (log n)^{m-1} / n^lambda times a sum of remainders, which let me index by the charts, plus the remaining charts, which have different leading order behavior. We will factor out this (log n)^{m-1} / n^lambda, so the first term becomes one big sum — that multiplicative term is a constant independent of the chart — plus, after factoring out the leading order, terms of the form (log n)^{m_alpha - m} / n^{lambda_alpha - lambda} times R_{alpha,n}. The claim that the essential charts have the leading order
behavior just means that this quantity decays to zero, meaning either lambda is smaller than lambda_alpha for all alpha, or, when they are equal, we need m to be bigger than m_alpha for all alpha — that's what is required for this convergence to hold. And then we take the negative log of the expression, which equals lambda log n minus (m - 1)

log log n, plus — I'm going to factor out the R_n's here — the negative log of R_n, plus the negative log of one plus something that decays to zero in probability, because of the effect we just discussed. So we need to prove that the first remainder term is order one, and we are going to prove that the last term is of order less than one — in fact of order 1/log n, since the worst behavior is when lambda equals lambda_alpha and m is greater than m_alpha by exactly one. So the to-do list is: first, isolate the essential, highest-order terms; second, justify this partition of the parameter space into integrals of the form that we have bounds for — that is, justify that in the M_alpha charts we can choose charts realizing the partition into essential and non-essential coordinates that we want; and third, show that the remaining term is O_p(1). All right, prepare for a lot of calculation; I'm moving back to the first board. We will start with Theorem 4.9 of the gray book. The purpose of this theorem is to isolate the essential part — not for all charts, but for each single integral. The theorem says that if c is a smooth function and v is greater than zero (actually it only needs to be C^1, but let's not zoom in on that), then we can define Y_n(c, v) to be the following complicated expression; the idea is that it is the central part of the singular integral, equal to some constant times the leading order behavior. By the way, m and lambda have the same interpretation throughout, in terms of the exponents of x^{2k}, y^{2k'}, x^h, and
y^{h'}: the k, k', h, and h' multi-indices are all given for a particular partition function. So Y_n(c, v) is equal to this integral, where c_0(y) is defined to be c(0, y), and v_0(y) is the same, v(0, y): we zero out the essential coordinates, which is the same as looking near the zero set — well, okay, I'm not too sure about the interpretation here, but we are definitely looking near the zero set,

and then we want to integrate out the y coordinates, the non-essential ones. So the theorem says that Y_n holds the leading order behavior for the partition function: there exists a constant c greater than zero such that the difference is bounded above by — and this is the crucial part — a (log n)^{m-2} term, so it's one order of logs smaller. The constant term here is complicated, and that's because of how the proof is constructed; these are all constants depending on the particular functions we have. We need to keep this sum of constants because later on we will substitute different functions for different charts. For the proof, in the interest of time I'm going to show the essential part — my notes have more details. The proof separates into two parts, two different bounds. By the triangle inequality, the term on the left-hand side is bounded as follows: we add and subtract a middle term, and there are two effects. One effect is to zero in on the set where x is zero, and the other effect is that you can pull out the leading order term once you have zeroed in on the zero set. So by the triangle inequality the above is smaller than these two terms, and we are going to discuss just the second one. The details in the gray book are very sparse for the first, and there are several typos and I think a modification is needed
— we can discuss that afterwards if you're interested — but let's look at this term. So let's write out that integral: it's equal to the partition function with c_0 and v_0 substituted. We are going to use the same trick as before and introduce our density of states integral into the expression by the property of the Dirac delta distribution: wherever we see x^{2k} y^{2k'}, because of

the property of the Dirac delta distribution: whenever we see it, we replace the argument by t. That's a very handwavy way of saying it, but it can be made precise using the defining property of the distribution. Okay, so we get that expression, and we are going to invoke a theorem we have proven before, Theorem 4.6, which is the trick with the Mellin transform and the inverse Mellin transform. The Mellin transform gives us the zeta function, and that is particularly easy to evaluate when we have normal crossings, as we do at the moment; the inverse Mellin transform then gives the expression for the density of states that we want. We use that trick to evaluate the dx part of the integral — integrate out the x — and that gives us γ_b, which is just some constant; the n^λ falls out of the theorem; and the dt integral runs from 0 to n b^{2|k'|} y^{2k'}, which comes from the expression here. With the x integrated out, we are left with the y integral of t^{λ−1} e^{−βt} — those exponential terms are not affected, because they only depend on y (I skipped ahead a little, so let me not do that) — times the log term the inverse Mellin transform gave us as well, (log(n b^{2|k'|} y^{2k'}/t))^{m−1}, and V(0, y).

"Yeah, I lost you there — what's this superscript, absolute value of k? Or sorry, absolute value of k prime?" So the k' are multi-indices, and |k'| is not an absolute value of anything — it just means summing up the entries. "All right, okay, yeah."

Now, I admit there's a bit of sleight of hand here: last time I only proved the bounds for the partition function where the integration domain is [0, 1], not [0, b], so I'm being a little tricky. We can derive the same bound as last time, up to multiplication by a constant, by the change of variables x ↦ x/b and y ↦ y/b. That substitution just modifies the constant at the front, but it introduces this very annoying b^{2|k'|} factor.
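To recap the Theorem 4.6 trick in symbols — this is my reconstruction from the spoken description, so treat the exact indices as provisional — the zeta function is the Mellin transform of the density of states, normal crossings make its poles explicit, and inverting gives the leading asymptotics:

```latex
% Zeta function as the Mellin transform of the density of states v(t):
\zeta(z) \;=\; \int K(w)^{z}\,\varphi(w)\,dw \;=\; \int_0^\infty t^{z}\,v(t)\,dt .
% In a normal-crossing chart, K(u) = u^{2k} and \varphi(u)\,du \propto u^{h}\,du, so
\zeta(z) \;=\; \prod_j \int_0^b u_j^{\,2k_j z + h_j}\,du_j ,
% which has poles at z = -(h_j+1)/(2k_j).  The largest pole z = -\lambda, with
% multiplicity m, controls the inverse Mellin transform:
v(t) \;\sim\; c\,t^{\lambda-1}\,(-\log t)^{m-1} \qquad (t \to 0^{+}).
```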

Okay. So, if we want to isolate the leading behavior, we are going to write this log as log n plus the log of the remaining b^{2|k'|} y^{2k'}/t term, all raised to the power m − 1, and binomially expand to isolate the leading behaviors. Okay, second board. So we now have that Z(n, C, V) — the equality still holds — equals γ_b times the following. I'm going to collect all the (log n)^{m−1} terms: in the binomial expansion the first term is (log n)^{m−1}, over n^λ, times the b^{2|k'|} constant, with the dt integral — let me write this out one last time. That's the first term of the binomial expansion, the one with the highest-order power of log n. The rest of the terms are collected into a sum whose index j now runs from 1 to m − 1 instead of from 0: (m − 1 choose j) times the same constant, except the exponent on log n is decreased by j, which is nonzero, over n^λ; you get the same dt integral, and then a dy integral with the same expression, except with the second term of the binomial expansion — the other log — raised to the summation variable j.

Now, this first term is almost the Y central part, except the integration domain is not correct. So we split the domain, which runs from 0 to that n b^{2|k'|} y^{2k'} number, into 0 to ∞, minus the piece from that number to ∞. (By the way, if anyone is keeping track, I should really write these two integrals in the other order, because the limits of the inner integral depend on the value of y.) The 0-to-∞ integral is exactly the Y(n, C, V) expression that we want, so we have written Z as Y times the leading factor, plus the Z₁ integral and the Z₂ integral. It remains to show that those two integrals have the asymptotic behavior we want, so let's go ahead and bound them. The Z₁ integral is just this expression, except the t-integration runs from that number to infinity — so t is always larger than that number. The constants at the front are not affected, and to handle this term in the exponent we will use the same trick as before.
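The binomial-expansion-and-split step above can be written compactly (again a reconstruction; A abbreviates the b^{2|k'|} y^{2k'}/t factor inside the log):

```latex
\bigl(\log n + \log A\bigr)^{m-1}
  \;=\; (\log n)^{m-1}
  \;+\; \sum_{j=1}^{m-1} \binom{m-1}{j} (\log n)^{m-1-j} (\log A)^{j},
% every term of the sum is down by at least one power of log n.  Splitting the
% t-domain of the leading term,
\int_0^{\,n b^{2|k'|} y^{2k'}} \;=\; \int_0^{\infty} \;-\; \int_{n b^{2|k'|} y^{2k'}}^{\infty},
% gives, schematically,
Z(n,C,V) \;=\; \gamma_b\,\frac{(\log n)^{m-1}}{n^{\lambda}}\;Y(n,C,V) \;+\; Z_1 \;+\; Z_2 .
```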

The trick we used before is that |ab| ≤ ½(a² + b²) for any real numbers. We absorb that into the first term of the exponent, at the penalty of a C² appearing, and then we just take the supremum, at the penalty of an inequality; that reduces the factor to something like e raised to (β/2 times the supremum of C squared). We do the same trick for the other function, just taking its supremum — so now we are using the boundedness of those two functions. And that's equal to an integral from 0 to ∞... right. As for the remaining domain: the integration is over the set where t is bigger than that n b^{2|k'|} y^{2k'} number, but we are going to integrate y from 0 to ∞ and use the Heaviside step function to enforce that domain; there is also a remaining n^λ factor. Then, if we call this y-integral — which is a function of t — V(t), we note that, by the (almost defining) property of the Dirac distribution, differentiating V turns the Heaviside step function into a Dirac delta. And then we use the same theorem we used before, the one which allowed us to integrate out the x part, the essential part: look at the density of states and realize that it is the inverse Mellin transform of a zeta function that is easy to evaluate because of normal crossings. But we need to be careful about which coordinates are essential now, because we don't have x coordinates anymore, only y's; among the y's, the essential coordinates are the ones for which (h'_j + 1)/(2k'_j) is lowest. Then we look via the RLCT, in a sense: calculate that minimum and calculate the multiplicity, and give them names — say the new RLCT is δ and the multiplicity is ℓ. Then we have an expression for this integral in terms of (t/n)^δ times (log(n b^{2k}/t))^{ℓ−1}. To get a bound on V itself, and not just on its derivative, one more step is needed.
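The Heaviside-to-Dirac step in symbols (a sketch — H is the Heaviside step function, and f stands for whatever remains of the y-integrand):

```latex
V(t) \;=\; \int H\!\bigl(t - n\,b^{2|k'|} y^{2k'}\bigr)\, f(y)\,dy ,
% differentiating in t turns the step function into a delta,
V'(t) \;=\; \int \delta\!\bigl(t - n\,b^{2|k'|} y^{2k'}\bigr)\, f(y)\,dy ,
% and the delta lets the y-integral be evaluated by the same Mellin-transform
% theorem, now with the essential y-coordinates being those that minimize
% (h'_j + 1)/(2 k'_j), with minimum \delta and multiplicity \ell.
```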

We just use the fundamental theorem of calculus: express V(t) as an integral of its derivative, and integrate by parts. That gives us this bound, and the important part is that an n^{−δ} comes out. (By the way, this bound here changes slightly, at the penalty of an inequality — details to be worked out.) The important thing is that the n^{−δ} comes out, which means this integral can be bounded by (log n)^{m−1}/n^{λ+δ}, where δ is a nonzero number — and then copy down the same thing. Okay, that's our first bound. In the interest of time, is it okay if I skip the Z₂ bound? It's really easy. "Yeah, sure, whatever you think is best."

Okay, so let's move to the next part. Once we have those bounds, we get the theorem: the central part of the singular integral. The next part is Theorem 6.5 in the gray book, and it is basically reiterating a lot of results we have seen before, but now done in local charts. So we are going to use resolution of singularities, and then a partition of unity to partition our compact parameter space. But here we make new assumptions. Assume that W, our parameter space, is semi-analytic, which means that W is cut out by a finite set of real analytic functions: W is the set of points x in ℝ^d such that π_j(x) ≥ 0 for j = 1, …, p, where the π_j are real analytic on some open set of ℝ^d. We also need the prior to satisfy an analyticity condition, meaning it can be factorized into a smooth part and an analytic part — smooth and strictly positive, times non-negative and analytic. I will push the discussion of this to the end.

So the theorem says that under these assumptions, together with fundamental condition I from before, for any ε > 0 we can simultaneously resolve the K function that we needed to resolve, the analytic part of the prior, and the boundary defining functions for W_ε, which are the following: the π_j are the boundary defining functions for W, and together with ε − K they are boundary defining functions for W_ε — this works because K is itself analytic, as we have assumed.
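To summarize the objects being simultaneously resolved (sketch):

```latex
W \;=\; \{\, x \in \mathbb{R}^d : \pi_1(x) \ge 0,\ \dots,\ \pi_p(x) \ge 0 \,\},
\qquad
W_\epsilon \;=\; \{\, x \in W : K(x) \le \epsilon \,\},
% so the boundary defining functions for W_\epsilon are the \pi_j together with
% \epsilon - K, and the simultaneous resolution is of the whole list:
K,\quad \varphi_2 \ \text{(the analytic factor of the prior)},\quad
\pi_1, \dots, \pi_p,\quad \epsilon - K .
```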

So we simultaneously resolve this series of functions and get a resolution, resolving W_ε; then M is covered by a finite set of local charts M_α of the form [0, b)^d. The justification for this is that every point p of the analytic manifold has a chart of the form (−b, b)^d, and that can be decomposed into a union of charts of the form [0, b)^d: in 2d there are four quadrants, in 3d there are eight octants — in general 2^d of these pieces. We do this for each point, which gives an open cover; by compactness we get a finite subcover, and each chart of the subcover decomposes into 2^d such charts. That's how we get our M_α. Now, this is at the level of justification, not a proof, and I think this is where we require semi-analyticity of W, because I think this construction is only justified near the boundary if W is semi-analytic. So this is a remaining unresolved question.

"Where does the fact that we've resolved the π's really enter into things, though? That's my question." That is exactly my problem with this. I think the justification has to be — if you allow me to draw on the board briefly — we have these coordinate charts in which we're doing these arguments, something like this one up here, where all the coordinates satisfy u ≥ 0, and in our heads we want to think of the boundary as "I just chopped this off", right? "Sorry, go on." "I was just going to say that if we resolve the π's, then indeed we can basically just assume that picture is true, I guess. It means the boundary fits into this kind of description very neatly: being at the boundary just means that instead of having another side to one of those coordinates, you just have something like this — which I suppose is what you were about to point out. Then the boundary is not a big deal: it's just where one of these things happens to not have another side, it's not an issue, and the proof probably doesn't really care."
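The chart-counting argument can be written down as follows (a sketch of the decomposition):

```latex
% A cube chart around a point decomposes into 2^d orthant charts:
(-b, b)^d \;=\; \bigcup_{\sigma \in \{\pm 1\}^d} \sigma \cdot [0, b)^d ,
% e.g. 4 quadrants when d = 2, 8 octants when d = 3.  Taking such a chart for
% every point of the compact manifold M gives an open cover; a finite subcover
% then yields the finitely many charts M_\alpha \cong [0, b)^d.
```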

Yeah — I'm going to keep going with the result, though I think there is one point where the proof should care about this. Basically, that is also my intuition: this open-set construction does not happen on the boundary. And that is a gap in the logic in the discussion of this theorem. Okay. The remainder of the theorem just says that in each chart M_α we have the normal crossings that we need, and then we can substitute in. We will call the charts whose multi-indices k and h attain the minimum — the global minimum, the RLCT — the essential coordinates.

I am going to use an extra board to hopefully collect all the results together and finish the proof. This is Theorem 6.7, and it says that if we collect all the results from before, we can prove the following convergence in law of random variables. Here α* is the set of essential charts, meaning those whose multi-indices k_α, h_α attain the minimum possible λ, the RLCT — that is, the global one. Then we have a convergence: if we factor out the leading-order behavior, the remaining part converges to a random variable. The C₀ and V₀ are defined in the same way as before, where the x's and y's together constitute u: x is the essential part, y is the rest. For the proof, we divide the integral into two parts: the part in W_ε, which is the part close to the zero set, and the part away from the zero set. (Thanks for erasing that, by the way.) First we see that, as n goes to infinity, the away part is irrelevant. On the set K(w) > ε, by the definition of the C_n random variable we had before — by the way, I think there is a typo in the gray book here, if you are reading it: a term is omitted — and by the same |ab| ≤ ½(a² + b²) trick again, or Cauchy–Schwarz, you can derive this inequality. And because K(w) is always greater than ε there, we can replace K with ε, at the penalty of an inequality, and make the negative term even more negative.
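The bound just described can be summarized like this (my reconstruction, assuming the convention K_n(w) = K(w) − n^{−1/2} K(w)^{1/2} C_n(w) from earlier in the seminar):

```latex
% On \{K(w) > \epsilon\}, using |ab| \le \tfrac12(a^2 + b^2):
n\beta K_n(w) \;=\; n\beta K(w) - \beta\sqrt{n K(w)}\,C_n(w)
  \;\ge\; \tfrac{n\beta}{2} K(w) - \tfrac{\beta}{2} C_n(w)^2 ,
% hence
e^{-n\beta K_n(w)} \;\le\; e^{-\frac{n\beta\epsilon}{2}}\,
  e^{\frac{\beta}{2}\,\sup_w C_n(w)^2},
% and since \sup_w C_n(w)^2 converges (uniform convergence of C_n) while
% e^{-n\beta\epsilon/2} \to 0, this piece vanishes in probability.
```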

The crucial thing in this argument is that we know C_n converges to C in law as long as we are away from the zero set, and furthermore, by a Glivenko–Cantelli type result, this convergence is actually uniform — meaning the sup over the whole domain converges as a random variable. So — sorry, let me relabel to be consistent: that one is actually Z₂ — for the Z₂⁽ⁿ⁾ integral, if you substitute these bounds into the expression, you get e^{−nβε/2} times e^{(β/2)·sup C_n²}. That converges in probability to zero, because the e^{−nβε/2} term kills everything else: everything else converges to a random variable, and the exponential decay in n beats it. So that term is irrelevant.

What is relevant is the first term, so let's go back to the first board and finish up. The first integral is a sum over α: it is over everything in W_ε, i.e. wherever K is less than or equal to ε, and from the previous theorem we have shown that this can be resolved; in local coordinates it becomes this expression. We are going to isolate the essential charts, as was the idea before, and factor out the leading behavior as well. That is equal to the expression for the essential part of the singular integral that we had before, but now for each local chart. And I think this is where the semi-analyticity is needed, because to be able to write this for every local chart — some of the charts actually hit the boundary — the expressions we have been using are valid for all local charts that look like [0, b)^d; actually, these are essential coordinates, so we do need them to factor into that form. What remains concerning is what happens when the zero set intersects the boundary — coordinate charts that intersect the boundary and are essential coordinate charts — and I think that's where semi-analyticity is needed. But anyway, that is equal to

this expression. "So why is this — you say semi-analyticity is needed there, and I agree, but do you think there's a gap here, or is it fine?" I'm not sure. It's on my list of discussion items: in between these two theorems that I'm proving, there is actually another theorem in the gray book, Theorem 6.6, which shows that if W is semi-analytic then the zeta function can be continued to a meromorphic function. That is very much parallel to Atiyah's division-of-distributions proof, and there semi-analyticity is also assumed and used in some essential way. So I think there is a gap, but I don't know what it is. For now, for this theorem, for what I'm writing here, we have to make the extra assumption that W₀ does not intersect the boundary. "Sure, yeah, okay."

So, recall that Y also carries this factor, so this cancels the (log n)^{m−1} divided by n^λ, and that's the remaining term for Y — I'm not going to write it out. We are going to use the theorem we proved earlier, which isolates the central part of Y, and that showed that in every local chart... "Just to clarify: you would be safe if W₀ doesn't meet the boundary at all, but all you really need is that no essential coordinate has its zero point at the boundary, right? As long as the essential coordinates are away from it, you don't care about the others — is that right?" I'm still a bit wary about that, because of what could happen with the boundary defining function itself. In the Atiyah-style proof, K(w)^s times the characteristic function of W extends meromorphically; the thing is, that characteristic function can contribute u^{2k}-type factors with some other multi-indices, and that might turn a coordinate into an essential coordinate. Because previously we have been defining essential coordinates purely based on the normal crossing form of K, this interaction concerns me. "Yeah, okay. Wow. All right."

Yeah, so Theorem 4.9 says that, after dividing out the leading term, there is still one log n remaining; so we take the sum over the essential charts, put in the constants that we derived before, multiplied by V plus the gradient of

that, plus that. Okay. In all of this, the σ_α of the partition of unity can be disregarded by another inequality, because you can just take the sup over the finite set of charts. And C_n converges to a Gaussian process on M, so it converges to a random variable; hence this whole thing converges to a random variable, and hence this term actually converges to zero in probability, because the remaining 1/log n kills everything. Therefore this converges to the essential Z₁ part, and that converges to that factor times Y(C). There is another important piece of information here: Y is continuous as a map between function spaces with respect to the supremum norm, and weak convergence — convergence in law — is preserved by continuous functions. So we can take the limit inside and replace the C_n with C here, and that proves the theorem; taking the log then gives us Main Formula Two. Sorry, I'm massively over time. "Don't apologize, that was great, thank you."

Right. So there are some remaining things to discuss, which is what I said before: where and how the semi-analyticity of W is used. Before, we only assumed W is compact, so it could actually be lower dimensional — but clearly that is forbidden, because we needed to extend the f function to a holomorphic function on an open set in ℂ^d, so the dimension cannot drop there. And then I still need to understand why the meromorphic continuation of zeta is required; that is intimately connected to Atiyah's paper. So again, Theorem 6.6 — the theorem that sits between the last two theorems that I proved — I don't know why it is required; it seems to me that it is not required. That theorem says that the zeta function analytically continues to a meromorphic function on all of ℂ. By the way, let me write that function in a particular way: ζ(z) equals the integral over all of ℝ^d of K(w)^z times the prior times the characteristic function of W, and that reduces to the integral over W of K(w)^z φ(w) dw. The content of that theorem is the following.
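Writing that out (sketch; the last line is the schematic form of the asymptotics the poles feed into, for the normalized partition function Z_n⁰):

```latex
\zeta(z) \;=\; \int_{\mathbb{R}^d} K(w)^{z}\,\varphi(w)\,\mathbf{1}_W(w)\,dw
        \;=\; \int_W K(w)^{z}\,\varphi(w)\,dw ,
% Theorem 6.6: \zeta extends meromorphically to all of \mathbb{C}; the largest
% pole z = -\lambda, of multiplicity m, is what enters the free energy:
-\log Z_n^{0} \;=\; \lambda \log n \;-\; (m-1)\log\log n \;+\; O_p(1).
```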

This thing — by that theorem — analytically continues to a meromorphic function, and then this factor can be Taylor expanded; we can isolate the constant-order term, and that preserves the RLCT: you have u^{2k} times u^{h} times some constant term, and therefore this whole thing analytically continues to a meromorphic function. I don't know why that is needed in the proof of the free energy formula, and I don't know how the poles coming from that characteristic function could affect the output. "It's very hard to understand, because why should the function itself matter at all? Clearly it's the set of zeros of γ, or the places where it's greater than or equal to zero — suppose my γ is x³, or x⁵, or x⁷: that series of functions cuts out the same set if I ask them to be non-negative, right? But the higher powers are more singular, so if they were to enter into this calculation, each one of them should naively have a different effect. Yet when I'm defining W, I don't care which one I pick — it's just a set. So certainly W as a variety — the equations for W — suddenly matters, for some reason I don't understand. In many places here I could swap out one set of analytic functions defining W for any other set, right?" Yep. "But then somewhere I'm not allowed to do that anymore, and I find it very hard to tell where it suddenly matters that I'm considering W not just as a set but really as a semi-analytic variety in its own right." Yeah. Also, there is a remark — Remark 6.6, I think — that justifies why we need the π's, the boundary defining functions, and also why we need the analyticity condition on the prior. It's about the behavior of the zeta function: it might acquire an essential singularity, or have no poles — I don't understand that remark at all, so clearly some thoughts have been omitted. "I suppose it must be that just writing the integral sign over W is too naive, right?" Yeah, I think so. "And then what we actually always mean is this thing you just wrote down, or some variant of it, where we integrate over