微積分 Calculus

Mathematics   Yoshio    Sep 8th, 2023 at 8:00 PM


Fundamental Theorem of Calculus

Integration

\(g(t)=\int_a^tf(\tau)\operatorname d\tau,\;(a\leq t\leq b)\)

\(\dot g(t)=f(t)\)




\(\dot g(t)=\lim_{\triangle t\rightarrow0}\frac{g(t+\triangle t)-g(t)}{\triangle t}\)

\(=\lim_{\triangle t\rightarrow0}\frac{f(t)\triangle t}{\triangle t}=f(t)\)


Antiderivative

\(\int_a^bf(\tau)\operatorname d\tau=h(b)-h(a)\)

\(h(t)=\int_\alpha^tf(\tau)\operatorname d\tau\)

\(\dot h(t)=f(t)\)

\(h(b)-h(a)=\int_\alpha^bf(\tau)\operatorname d\tau-\int_\alpha^af(\tau)\operatorname d\tau=\int_a^bf(\tau)\operatorname d\tau\)

Ex

\(g(t)=\int_a^tf(\tau,t)\operatorname d\tau\)

\(\dot g(t)=f(t,t)+\int_a^t\frac\partial{\partial t}f(\tau,t)\operatorname d\tau\)


\(f(\tau,t)=t^2+2\tau t\)

\(g(t)=\int_{-3}^t(t^2+2\tau t)\operatorname d\tau\)

\(g(t)=\left.(t^2\tau+t\tau^2)\right|_{-3}^t\)

\(=2t^3+3t^2-9t\)

\(\dot g(t)=6t^2+6t-9\)

(a) \(f(t,t)=t^2+2t^2=3t^2\)

\(\frac\partial{\partial t}f(\tau,t)=2t+2\tau\)

(b) \(\int_{-3}^t\frac\partial{\partial t}f(\tau,t)\operatorname d\tau=\int_{-3}^t(2t+2\tau)\operatorname d\tau\)

\(=\left.(2t\tau+\tau^2)\right|_{-3}^t=2t^2+t^2+6t-9\)

(a)+(b) \(\dot g(t)=6t^2+6t-9\)
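The worked example can be verified symbolically; a minimal sketch with SymPy (assuming it is installed):

import sympy as sp

t, tau = sp.symbols('t tau')
f = t**2 + 2*tau*t                      # f(tau, t) from the example

# g(t) = integral of f(tau, t) from -3 to t, then differentiate directly
g = sp.integrate(f, (tau, -3, t))
print(sp.expand(g))                     # 2*t**3 + 3*t**2 - 9*t
print(sp.expand(sp.diff(g, t)))         # 6*t**2 + 6*t - 9

# Leibniz rule: f(t, t) + integral of d/dt f(tau, t) over tau
leibniz = f.subs(tau, t) + sp.integrate(sp.diff(f, t), (tau, -3, t))
print(sp.expand(leibniz))               # 6*t**2 + 6*t - 9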


Derivative Rules

The Sum and Difference Rules

\( \frac d{dx}\left[f\left(x\right)+g\left(x\right)\right]=\frac d{dx}f\left(x\right)+\frac d{dx}g\left(x\right)\)

\( \frac d{dx}\left[f\left(x\right)-g\left(x\right)\right]=\frac d{dx}f\left(x\right)-\frac d{dx}g\left(x\right)\)


\(h\left(x\right)=f\left(x\right)+g\left(x\right)\)


\(\frac d{dx}h\left(x\right)\)

\(=\lim\limits_{\triangle x\rightarrow0}\frac{h\left(x+\triangle x\right)-h\left(x\right)}{\triangle x}\)

\(=\lim\limits_{\triangle x\rightarrow0}\frac{\left[f\left(x+\triangle x\right)+g\left(x+\triangle x\right)\right]-\left[f\left(x\right)+g\left(x\right)\right]}{\triangle x}\)

\(=\lim\limits_{\triangle x\rightarrow0}\frac{\left[f\left(x+\triangle x\right)-f\left(x\right)\right]+\left[g\left(x+\triangle x\right)-g\left(x\right)\right]}{\triangle x}\)

\(=\lim\limits_{\triangle x\rightarrow0}\frac{f\left(x+\triangle x\right)-f\left(x\right)}{\triangle x}+\lim\limits_{\triangle x\rightarrow0}\frac{g\left(x+\triangle x\right)-g\left(x\right)}{\triangle x}\)

\(=\frac d{dx}f\left(x\right)+\frac d{dx}g\left(x\right)\)


The Product Rule

\(\frac d{dx}\left[f\left(x\right)\cdot g\left(x\right)\right]=g\left(x\right)\cdot\frac d{dx}f\left(x\right)+f\left(x\right)\cdot\frac d{dx}g\left(x\right)\)


The Quotient Rule

\(D\left(\frac fg\right)=\frac{g\cdot D\left(f\right)-f\cdot D\left(g\right)}{g^2}\)


\(D\left(\frac fg\right)\)

\(=D\left(f\cdot g^{-1}\right)\)

\(=f\cdot D\left(g^{-1}\right)+D\left(f\right)\cdot g^{-1}\)

\(=-f\cdot g^{-2}\cdot D\left(g\right)+D\left(f\right)\cdot g^{-1}\)

\(=-f\cdot g^{-2}\cdot D\left(g\right)+D\left(f\right)\cdot g\cdot g^{-2}\)

\(=\frac{g\cdot D\left(f\right)-f\cdot D\left(g\right)}{g^2} \)


The Chain Rule

\[\frac{dy}{dx}=\frac{dy}{du}{}\frac{du}{dx}\]


Suppose that we have two functions \(f(x)\) and \(g(x)\) and they are both differentiable.

If we define \(F(x) = (f∘g)(x)\) then the derivative of \(F(x)\) is,

\[F^{\prime}{(x)}=f^{\prime}{({g{(x)}})}{}g^{\prime}{(x)}\]

If we have \(y = f(u)\) and \(u = g(x)\) then the derivative of \(y\) is,

\[\frac{dy}{dx}=\frac{dy}{du}{}\frac{du}{dx}\]


Partial Integration

\(\frac d{dx}\lbrack f(x)\cdot g(x)\rbrack\;=\;g(x)\cdot\frac d{dx}f(x)+f(x)\cdot\frac d{dx}g(x)\)

\(\int\left\{\frac d{dx}\lbrack f(x)\cdot g(x)\rbrack\right\}\;dx\)

\(=\int\lbrack g(x)\cdot\frac d{dx}f(x)+f(x)\cdot\frac d{dx}g(x)\rbrack\;dx\)

\(=\int\lbrack g(x)\cdot\frac d{dx}f(x)\rbrack dx+\int\lbrack f(x)\cdot\frac d{dx}g(x)\rbrack dx\)


\(f(x)\cdot g(x)=\int\lbrack g(x)\cdot\frac d{dx}f(x)\rbrack dx+\int\lbrack f(x)\cdot\frac d{dx}g(x)\rbrack dx\)

\(f\cdot g\;=\;\int g\cdot\operatorname df+\int f\cdot\operatorname dg\)

\(\int f\cdot\operatorname dg\;=f\cdot g-\;\int g\cdot\operatorname df\)


\(\int u\operatorname dv=uv-\int v\operatorname du\)
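A quick worked example of the formula, taking \(u=x\) and \(\operatorname dv=\cos x\operatorname dx\) (so \(\operatorname du=\operatorname dx\) and \(v=\sin x\)):

\(\int x\cos x\operatorname dx=x\sin x-\int\sin x\operatorname dx=x\sin x+\cos x+C\)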


Tabular Integration

Repeated integration by parts is also called the DI method or the tabular method.


Considering a second derivative of \(v\) in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS:

Extending this concept of repeated partial integration to derivatives of degree \(n\) leads to


Step   Procedure
Step 1 In the product comprising the function \(f\), identify the polynomial and denote it \(F(x)\). Denote the other function in the product by \(G(x)\).
Step 2 Create a table of \(F(x)\) and \(G(x)\), and successively differentiate F(x) until you reach \(0\). Successively integrate \(G(x)\) the same amount of times.
Step 3 Negate every second entry under \(F(x)\).
Step 4 Construct the integral by taking the product of \(F(x)\) and the first integral of \(G(x)\), then add the product of \(F′(x)\) times the second integral of \(G(x)\), then add the product of \(F′′(x)\) times the third integral of \(G(x)\), etc…
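For example, applying the table to \(\int x^2e^x\operatorname dx\) with \(F(x)=x^2\) and \(G(x)=e^x\):

Sign   Derivatives of \(F(x)\)   Integrals of \(G(x)\)
+      \(x^2\)                   \(e^x\)
−      \(2x\)                    \(e^x\)
+      \(2\)                     \(e^x\)
       \(0\)

\(\int x^2e^x\operatorname dx=x^2e^x-2xe^x+2e^x+C\)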


LIATE rule

Whichever function comes first in the following list should be \(u\):

Abbreviation Function type Examples
L Logarithmic functions \(\ln(x),\;\log_2(x),\;etc.\)
I Inverse trigonometric functions \(\tan^{-1}(x),\;\sin^{-1}(x),\;etc.\)
A Algebraic functions \(x,\;3x^3,\;5x^{21},\;etc.\)
T Trigonometric functions \(\cos(x),\;\tan(x),\;\operatorname{sech}(x),\;etc.\)
E Exponential functions \(e^x,\;7^x,\;etc.\)


Partial Integration Applications

Antiderivatives
Polynomials and trigonometric functions
Exponentials and trigonometric functions
Gamma function identity

Green's first identity

https://en.wikipedia.org/wiki/Integration_by_parts

Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function u and vector-valued function (vector field) V.

Integration Formulas

Trigonometry



Difference Between a Partial Derivative and Total Derivative

\(\frac{\operatorname dz}{\operatorname dx}=\frac{\partial z}{\partial x}+\frac{\partial z}{\partial y}\frac{\operatorname dy}{\operatorname dx}\)

A partial derivative assumes all other variables are constant.

As an example, take \(f(x,\;y)=x^2y+2y\)

The total derivative of \(f\) with respect to \(x\) is

\(\frac d{dx}(x^2y+2y)=\frac d{dx}x^2y+\frac d{dx}2y=2xy+x^2\frac{dy}{dx}+2\frac{dy}{dx}\)

The partial derivative of \(f\) with respect to \(x\) is \(\frac\partial{\partial x}(x^2y+2y)=\frac\partial{\partial x}x^2y+\frac\partial{\partial x}2y=2xy\)

Note that the partial derivative can be obtained from the normal derivative by substituting \(\frac{dy}{dx}=0\), since that is how constants behave under differentiation.

Left: Tangent line slope of the surface in the x-direction

Right: Tangent line slope of the surface in the y-direction


Another example:

\(f(x,y) = \sin(x)+3y^2\)

If I asked you for the partial derivative with respect to \(x\), you should write:

\(\frac{\partial f(x,y)}{\partial x} = \cos(x)+0\)

Here \(y\) is effectively a constant with respect to \(x\); in other words, \(y\) is treated as if it did not vary with \(x\). However, if I asked you for the total derivative with respect to \(x\), you should write:

\(\frac{df(x,y)}{dx}=\cos(x)\cdot {dx\over dx} + 6y\cdot {dy\over dx}\)

You wouldn't write \(dx\over dx\) in practice since it's just 1.
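The distinction can be reproduced symbolically; a minimal SymPy sketch, where \(y\) is declared as an independent symbol for the partial derivative and as a function of \(x\) for the total derivative:

import sympy as sp

x = sp.symbols('x')

# Partial derivative: y treated as an independent symbol (a constant w.r.t. x)
y_sym = sp.symbols('y')
print(sp.diff(sp.sin(x) + 3*y_sym**2, x))   # cos(x)

# Total derivative: y(x) is allowed to vary with x
y = sp.Function('y')(x)
print(sp.diff(sp.sin(x) + 3*y**2, x))       # cos(x) + 6*y(x)*Derivative(y(x), x)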


Differentiation formulas

Trigonometry






Hypercomplex number

In mathematics, hypercomplex number is a traditional term for an element of a finite-dimensional unital algebra over the field of real numbers. The study of hypercomplex numbers in the late 19th century forms the basis of modern group representation theory.

Hyperreal number

In mathematics, the system of hyperreal numbers is a way of treating infinite and infinitesimal (infinitely small but non-zero) quantities.

Schrödinger Equation

Padé approximant

Integral vs Antiderivative

Differential vs Derivative

Integral vs Antiderivative

Let a function \(f(x)\) be given. An antiderivative of \(f(x)\) is a function \(F(x)\) such that \(F′(x)=f(x)\).

The set of all antiderivatives of \(f(x)\) is the indefinite integral of \(f\), denoted by \(\int {f(x)}dx\)

Ex

\(f(x)=x^2\)

An antiderivative of \(x^2\) is \(F(x)=\frac13x^3\).

The indefinite integral is \(\int x^2dx=\frac13x^3+C\), which is the antiderivative plus an arbitrary constant \(C\).

On the other hand, we learned about the Fundamental Theorem of Calculus a couple of weeks ago; applying the second part of that theorem gives a definite integral.

The definite integral is \(\int_a^bx^2dx=F(b)-F(a)=\frac13(b^3-a^3)\).


Differential vs Derivative

In mathematics, the rate of change of one variable with respect to another variable is called a derivative, and equations which express the relationship between these variables and their derivatives are called differential equations.

Differentiation is the process of finding a derivative. The derivative of a function is the rate of change of its output value with respect to its input value, whereas the differential \(dy=f'(x)\,dx\) is the corresponding change in the function for an infinitesimal change \(dx\) in its input.


Differential Power Rule

For positive integers \(n\), we can use the Binomial Theorem. Let \(f(x)=x^n\). We want to find the slope of the tangent line to \(y=f(x)\) at \(x=a\). So take a very small \(h\), and calculate

\[\frac{(a+h)^n-a^n}{h}.\]

By the Binomial Theorem, \((a+h)^n=a^n+na^{n-1}h +\binom{n}{2}a^{n-2}h^2+\cdots\)

Since \(h\) is tiny, \(h^2\), \(h^3\), and so on are negligible compared to \(h\). Thus

\[\frac{(a+h)^n-a^n}{h}\approx na^{n-1}.\]


Cubic formula

For a cubic equation \(ax^3+bx^2+cx+d=0\)




Convolution and Correlation

Convolution

Convolution of two causal sequences is causal.
Convolution of two anti-causal sequences is anti-causal.
Convolution of two unequal-length rectangles results in a trapezium.
Convolution of two equal-length rectangles results in a triangle.
Convolving a function with the unit step is equivalent to integrating that function.

Sine Rule

proof:

Acute triangles

\(\sin\;B=\frac hc\;and\;\sin\;C=\frac hb\)

\(h\;=\;c\;\sin\;B\;and\;h\;=\;b\;\sin\;C\)

\(c\;\sin\;B\;=\;b\;\sin\;C\)

\(\frac c{\sin\;C}\;=\;\frac b{\sin\;B}\;\)

Using a similar method it can be shown that in this case

\(\frac c{\sin\;C}\;=\;\frac a{\sin\;A}\;\)

Combining them together

\(\frac a{\sin\;A}\;=\;\frac b{\sin\;B}=\;\frac c{\sin\;C}\)

Obtuse Triangles

\(\sin\;B=\frac hc\;and\;\sin\;C=\frac hb\)

\(h\;=\;c\;\sin\;B\;and\;h\;=\;b\;\sin\;C\)

\(c\;\sin\;B\;=\;b\;\sin\;C\)

\(\frac c{\sin\;C}\;=\;\frac b{\sin\;B}\;\)

The angles BAC and BAK are supplementary, so their sines are the same.

Angle A is BAC, so \(\sin\;A\;=\;\frac hc\)

\(h\;=\;c\;\sin\;A\)

In the larger triangle CBK \(\sin\;C\;=\;\frac ha\)

\(h\;=\;a\;\sin\;C\)

\(c\;\sin\;A\;=\;a\;\sin\;C\)

\(\frac a{\sin\;A}\;=\;\frac c{\sin\;C}\)

Combining them together

\(\frac a{\sin\;A}\;=\;\frac b{\sin\;B}=\;\frac c{\sin\;C}\)

\(\frac{\sin\;A}a\;=\;\frac{\sin\;B}b=\;\frac{\sin\;C}c\)


Cosine Rule

proof:

In the right triangle BCD, from the definition of cosine:

\(\cos\;C\;=\;\frac{CD}a\)

\(CD\;=\;a\;\cos\;C\)

Subtracting this from the side b, we see that

\(DA\;=\;b\;-\;a\;\cos\;C\)

In the triangle BCD, from the definition of sine:

\(\sin\;C\;=\;\frac ha\;=\;\frac{BD}a\)

\(BD\;=\;a\;\sin\;C\)

In the triangle ADB, applying the Pythagorean Theorem

\(c^2\;=\;\overline{BD}^2\;+\;\overline{DA}^2\)

Substituting for BD and DA

\(c^2\;={(a\;\sin\;C)}^2\;+\;{(b\;-\;a\;\cos\;C)}^2\)

Multiplying out the parentheses

\(c^2\;=a^2\sin^2C\;+\;b^2\;-\;2ab\;\cos\;C\;+\;a^2\cos^2C\)

Rearranging the terms

\(c^2\;=a^2\sin^2C\;+\;a^2\cos^2C\;+\;b^2\;-\;2ab\;\cos\;C\;\)

Factoring out \(a^2\)

\(c^2\;=a^2(\sin^2C\;+\;\cos^2C)\;+\;b^2\;-\;2ab\;\cos\;C\;\)

Looking at the terms in the parentheses above, recall that this is one of the trig identities, which states that

\(\sin^2\theta\;+\;\cos^2\theta\;=\;1\)

So the terms in the parentheses can be removed, since multiplying \(a^2\) by one leaves it unchanged

\(c^2\;=a^2\;+\;b^2\;-\;2ab\;\cos\;C\;\)


Tangent and Intersected Chord Theorem


\(m\angle1=\frac12m\widehat{AC}\;and\;m\angle2=\frac12m\widehat{ADC}\)



Circle Theorems


Kepler's third law of planetary motion: "The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit." The image below shows the semi-major and semi-minor axes of an orbit. The central body, Earth in this case, is located at one of the foci of the orbit.

For the sake of explanation, let us assume for the moment that the orbit is in the same plane as the Earth's equator. A geosynchronous orbit has an orbital period of 24 hours (23:56:04 to be exact), the same as the Earth's rotational period. That means the satellite will pass over the same location on Earth every 24 hours. By Kepler's third law, the semi-major axis for an orbit with a 24-hour period comes out to 42,164 km. However, Kepler's law doesn't specify the eccentricity of the orbit. If you are not aware what eccentricity means, it is just a mathematical equivalent of the aspect ratio of an ellipse.

So the law allows an elliptical orbit of any eccentricity (or aspect ratio) to have a 24-hour period as long as the size of the orbit is maintained at 42,164 km. An orbit with zero eccentricity (e=0) is a circular orbit (i.e. an aspect ratio of 1), and in this case it forms the geostationary orbit, meaning the satellite remains above the same location on Earth at all times and appears stationary. Now let's introduce a slight eccentricity: the satellite's speed varies depending on whether it is at perigee or apogee. Perigee (or perihelion) and apogee (or aphelion) are the nearest and farthest points of the elliptical orbit from the Earth.

The satellite slows down at the farthest point (apogee) and moves faster at perigee. From Earth this appears as if the satellite is sometimes leading and at other times lagging the rotation of the Earth, i.e. the satellite appears to oscillate in the sky, moving back and forth (west-east-west), but the eastward and westward speeds are not equal. Now let us drop the assumption that the orbit lies in the equatorial plane, and instead assume an orbit in a plane at an angle to the equatorial plane, as shown in the figure:


A satellite orbiting in this plane appears to move above and below the equator (north-south). Now imagine a combination of the two movements just described, the east-west oscillation and the north-south oscillation. You get a figure-8 shape. Voilà!



Complexity

We can expand the phone book example to compare other kinds of operations and their running time. We will assume our phone book has businesses (the "Yellow Pages") which have unique names and people (the "White Pages") which may not have unique names. A phone number is assigned to at most one person or business. We will also assume that it takes constant time to flip to a specific page.

Here are the running times of some operations we might perform on the phone book, from fastest to slowest:

  • O(1) (in the worst case): Given the page that a business's name is on and the business name, find the phone number.

  • O(1) (in the average case): Given the page that a person's name is on and their name, find the phone number.

  • O(log n): Given a person's name, find the phone number by picking a random point about halfway through the part of the book you haven't searched yet, then checking to see whether the person's name is at that point. Then repeat the process about halfway through the part of the book where the person's name lies. (This is a binary search for a person's name; a sketch appears after this list.)

  • O(n): Find all people whose phone numbers contain the digit "5".

  • O(n): Given a phone number, find the person or business with that number.

  • O(n log n): There was a mix-up at the printer's office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it's correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book.
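A minimal sketch of the binary-search lookup described in the O(log n) item above (Python, assuming the entries are already sorted by name):

def find_phone_number(entries, name):
    """entries: list of (name, number) tuples sorted by name."""
    lo, hi = 0, len(entries) - 1
    while lo <= hi:
        mid = (lo + hi) // 2            # open the book about halfway through
        if entries[mid][0] == name:
            return entries[mid][1]
        elif entries[mid][0] < name:
            lo = mid + 1                # the name lies in the later half
        else:
            hi = mid - 1                # the name lies in the earlier half
    return None                         # not listed

book = [("Alice", "555-0101"), ("Bob", "555-0102"), ("Carol", "555-0103")]
print(find_phone_number(book, "Bob"))   # 555-0102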

For the below examples, we're now at the printer's office. Phone books are waiting to be mailed to each resident or business, and there's a sticker on each phone book identifying where it should be mailed to. Every person or business gets one phone book.

  • O(n log n): We want to personalize the phone book, so we're going to find each person or business's name in their designated copy, then circle their name in the book and write a short thank-you note for their patronage.

  • O(n²): A mistake occurred at the office, and every entry in each of the phone books has an extra "0" at the end of the phone number. Take some white-out and remove each zero.

  • O(n · n!): We're ready to load the phonebooks onto the shipping dock. Unfortunately, the robot that was supposed to load the books has gone haywire: it's putting the books onto the truck in a random order! Even worse, it loads all the books onto the truck, then checks to see if they're in the right order, and if not, it unloads them and starts over. (This is the dreaded bogo sort.)

  • O(nⁿ): You fix the robot so that it's loading things correctly. The next day, one of your co-workers plays a prank on you and wires the loading dock robot to the automated printing systems. Every time the robot goes to load an original book, the factory printer makes a duplicate run of all the phonebooks! Fortunately, the robot's bug-detection systems are sophisticated enough that the robot doesn't try printing even more copies when it encounters a duplicate book for loading, but it still has to load every original and duplicate book that's been printed.


Resource  Determinism        Complexity class  Resource constraint
Space     Non-deterministic  NSPACE(f(n))      O(f(n))
                             NL                O(log n)
                             NPSPACE           O(poly(n))
                             NEXPSPACE         O(2^poly(n))
          Deterministic      DSPACE(f(n))      O(f(n))
                             L                 O(log n)
                             PSPACE            O(poly(n))
                             EXPSPACE          O(2^poly(n))
Time      Non-deterministic  NTIME(f(n))       O(f(n))
                             NP                O(poly(n))
                             NEXPTIME          O(2^poly(n))
          Deterministic      DTIME(f(n))       O(f(n))
                             P                 O(poly(n))
                             EXPTIME           O(2^poly(n))


For a balanced binary search tree, \(O(\log n)\) means that the height of the tree is proportional to \(\log n\). Since this is a binary tree, the logarithm is base \(2\).


Quick sort
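A minimal quicksort sketch in Python; its average running time is O(n log n), with a worst case of O(n²):

def quicksort(arr):
    # Recursively partition around a pivot; average O(n log n),
    # worst case O(n^2) when the partitions are badly unbalanced.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left   = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right  = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))   # [1, 2, 3, 4, 6, 8, 9]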


Inverse of a 2×2 Matrix

The inverse of a 2×2 matrix can be found with a formula.

\(A = \left[\begin{matrix} a & b \\ c & d \end{matrix}\right]\)

\(A^{-1} = \frac{1}{ad-bc} \left[\begin{matrix} d & -b \\ -c & a \end{matrix}\right]\)

The Cayley–Hamilton method gives

\(A^{-1}=\frac1{\det A}\lbrack(\operatorname{tr}A)I-A\rbrack\)
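Both formulas can be checked numerically; a small sketch in plain Python with an arbitrary example matrix:

def inverse_2x2(a, b, c, d):
    # Adjugate formula: swap a and d, negate b and c, divide by the determinant
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def inverse_2x2_cayley(a, b, c, d):
    # Cayley-Hamilton form: A^{-1} = (1/det A) * [(tr A) I - A]
    det = a * d - b * c
    tr = a + d
    return [[(tr - a) / det, -b / det],
            [-c / det, (tr - d) / det]]

print(inverse_2x2(4, 7, 2, 6))          # [[0.6, -0.7], [-0.2, 0.4]]
print(inverse_2x2_cayley(4, 7, 2, 6))   # same result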


Resolvent formalism


Fractional Calculus

Fractional Integration
Cauchy formula for repeated integration

\[f^{(-n)}(x)=\frac1{(n-1)!}\int_a^x(x-t{)^{n-1}f(t)}\;dt\]

leads in a straightforward way to a generalization for real n.

\[I^\alpha f(x)=\frac1{\Gamma(\alpha)}\int_a^xf(t)(x-t)^{\alpha-1}\;dt\]

\(f\) is a locally integrable function on \([a,x]\), and \(α\) is a complex number in the half-plane Re(α) > 0

Using the Gamma function to remove the discrete nature of the factorial function gives us a natural candidate for fractional applications of the integral operator.

The Gamma function (represented by \(\Gamma\), the capital letter gamma from the Greek alphabet) is one commonly used extension of the factorial function to complex numbers.

\(\Gamma(n)=(n-1)!\). The Gamma function is not defined for \(0\) and the negative integers.

\(\Gamma(\alpha)=\int_0^\infty x^{\alpha-1}e^{-x}dx,\;\alpha>0\)

\(=-\int_0^\infty x^{\alpha-1}de^{-x}\)

\(=\left|-x^{\alpha-1}e^{-x}\right|_0^\infty+\int_0^\infty e^{-x}dx^{\alpha-1}\)

\(=\int_0^\infty e^{-x}dx^{\alpha-1}=\int_0^\infty e^{-x}(\alpha-1)x^{\alpha-2}dx\)

\(=(\alpha-1)\int_0^\infty x^{\alpha-2}e^{-x}dx\)

\(\Gamma(\alpha)=(\alpha-1)\Gamma(\alpha-1)\)

\(\Gamma(1)=\int_0^\infty t^{1-1}e^{-t}dt\)

\(=\int_0^\infty e^{-t}dt\)

\(=1\)

\(\Gamma(2)=1\Gamma(1)=1\)

\(\Gamma(3)=2\Gamma(2)=2\)

\(\Gamma(4)=3\Gamma(3)=3\times2\)

\(\Gamma(n)=(n-1)!\)


\(\Gamma(\frac12)=\int_0^\infty t^{\frac12-1}e^{-t}dt=\int_0^\infty t^{-\frac12}e^{-t}dt\)

\(let\;t=u^2,\;dt=2udu\)

\(=\int_0^\infty u^{-1}e^{-u^2}2udu=2\int_0^\infty e^{-u^2}du\)

\(let\;I=\int_0^\infty e^{-x^2}dx\)

\(I^2=\int_0^\infty\int_0^\infty e^{-x^2}e^{-y^2}dxdy\)

\(I^2=\int_0^\frac{\mathrm\pi}2\int_0^\infty e^{-r^2}rdrd\theta=\left|-\frac12e^{-r^2}\right|_0^\infty\cdot\frac{\mathrm\pi}2\)

\(I^2=\frac{\mathrm\pi}4,\;I=\frac{\sqrt{\mathrm\pi}}2\)

\(\Gamma(\frac12)=\;2\cdot I=\sqrt{\mathrm\pi}\)
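These values can be confirmed numerically with the standard-library gamma function; a minimal check in Python:

import math

# Gamma(n) == (n-1)! for positive integers
for n in range(1, 6):
    print(n, math.gamma(n), math.factorial(n - 1))

# Gamma(1/2) == sqrt(pi)
print(math.gamma(0.5), math.sqrt(math.pi))   # both ~1.7724538509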


Riemann–Liouville fractional integral

The Riemann–Liouville integral exists in two forms, upper and lower. Considering the interval \([a,b]\), the integrals are defined as

\({}_a{D_t^{-\alpha}f(t)}={}_a{I_t^\alpha f(t)}\)

\(=\frac1{\Gamma(\alpha)}\int_a^t{(t-\tau)}^{\alpha-1}f(\tau)d\tau\)

\({}_t{D_b^{-\alpha}f(t)}={}_t{I_b^\alpha f(t)}\)

\(=\frac1{\Gamma(\alpha)}\int_t^b{(\tau-t)}^{\alpha-1}f(\tau)d\tau\)


differentiation operator \(D\)    \(Df(x)=\frac d{dx}f(x)\)

integration operator \(J\)    \(Jf(x)=\int_0^xf(s)ds\)

\((J^nf)(x)=\frac1{(n-1)!}\int_0^x{(x-t)}^{n-1}f(t)dt\)

\((J^\alpha)(J^\beta)f=(J^\beta)(J^\alpha)f=(J^{\alpha+\beta})f\)

\(=\frac1{\Gamma(\alpha+\beta)}\int_0^x{(x-t)}^{\alpha+\beta-1}f(t)dt\)


Half integral of x

We can write the half integral of x as,

\[\int_\frac12f(x){(dx)}^\frac12\]

\(\int_\frac12x{(dx)}^\frac12\)

\(\because\frac{d^n}{dx^n}x^{2n}=\frac{2(2n-1)!}{(n-1)!}x^n\)

\(\frac{d^\frac12}{dx^\frac12}x=\frac2{\sqrt{\mathrm\pi}}\sqrt x\)

This is the half derivative of x, i.e. the inverse of the half integral of x.

Repeated integrals of \(x\) suggest the general pattern:

\(\int xdx=\frac12x^2+C\)

\(\iint x{(dx)}^2=\frac16x^3+C\)

\(\therefore\int_nx{(dx)}^n=\frac1{(n+1)!}x^{n+1}+C=\frac{x^{n+1}}{\Gamma(n+2)}+C\)

\(\int_{\textstyle\frac12}x{(dx)}^{\textstyle\frac12}=\frac{x^{\textstyle\frac32}}{\Gamma(\frac52)}+C\)

\(\Gamma({\textstyle\frac52})={\textstyle\frac32}\cdot{\textstyle\frac12}\cdot\Gamma({\textstyle\frac12})=\frac{3\sqrt{\mathrm\pi}}4\)

\(\int_{\textstyle\frac12}x{(dx)}^{\textstyle\frac12}=\frac4{3\sqrt{\mathrm\pi}}x^{\textstyle\frac32}+C=\frac4{3\sqrt{\mathrm\pi}}x\sqrt x+C\)

This agrees with the semi-integral formula \(D^{-\frac12}t^\lambda\) given below, evaluated at \(\lambda=1\).
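The result can also be checked numerically against the Riemann–Liouville integral with \(\alpha=\frac12\); a small sketch assuming SciPy is available:

import math
from scipy.integrate import quad

def half_integral_of_x(x):
    # Riemann-Liouville: J^(1/2) f(x) = 1/Gamma(1/2) * integral_0^x (x-t)^(-1/2) * t dt
    # The (x-t)^(-1/2) singularity at t = x is integrable.
    val, _ = quad(lambda t: (x - t) ** (-0.5) * t, 0, x)
    return val / math.gamma(0.5)

x = 2.0
print(half_integral_of_x(x))                       # ~2.128
print(4 / (3 * math.sqrt(math.pi)) * x ** 1.5)     # ~2.128, matches the formula above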


Fractional Derivative

\((J^\alpha f)(x)=\frac1{\Gamma(\alpha)}\int_0^x{(x-t)}^{\alpha-1}f(t)dt\)

\(when\;0<\alpha<1\)

\(D^\alpha f(x)=\frac1{\Gamma(1-\alpha)}\frac d{dx}\int_0^x\frac{f(t)}{{(x-t)}^\alpha}dt\)

Left Riemann–Liouville Fractional Derivative

\(D_a^nf(x)=\frac1{\Gamma(\left\lceil n\right\rceil-n)}\frac{d^{\left\lceil n\right\rceil}}{dx^{\left\lceil n\right\rceil}}\int_a^x{(x-t)}^{\left\lceil n\right\rceil-n-1}f(t)dt\)


Similar to the definitions for the Riemann–Liouville integral, the derivative has upper and lower variants

\({}_a{D_t^\alpha f(t)}=\frac{d^n}{dt^n}{}_a{D_t^{-(n-\alpha)}f(t)}\)

\(=\frac{d^n}{dt^n}{}_a{I_t^{n-\alpha}f(t)}\)

\({}_t{D_b^\alpha f(t)}=\frac{d^n}{dt^n}{}_t{D_b^{-(n-\alpha)}f(t)}\)

\(=\frac{d^n}{dt^n}{}_t{I_b^{n-\alpha}f(t)}\)


The Riemann-Liouville Integral of order \(\alpha,\;0<\alpha<1,\) is represented as

\(I_{0^+}^\alpha f(t)=\frac1{\Gamma(\alpha)}\int_0^t\frac{f(\tau)}{{(t-\tau)}^{1-\alpha}}d\tau\)

Correspondingly, The Riemann-Liouville Derivative of order \(\alpha\) is defined by

\(D_{0^+}^\alpha f(t)=\frac1{\Gamma(1-\alpha)}\frac d{dt}\int_0^t\frac{f(\tau)}{{(t-\tau)}^\alpha}d\tau\)

These definitions are valid for \(0 < t < T\), where \(T > 0\). It can be shown that \(D_{0^+}^\alpha f(t)=\frac d{dt}I_{0^+}^{1-\alpha}f(t)\)


Semi-Integral

A fractional integral of order \(\frac12\). The semi-integral of \(t^\lambda\) is given by

\(D^{-{\textstyle\frac12}}t^\lambda=\frac{t^{\lambda+\frac12}\Gamma(\lambda+1)}{\Gamma(\lambda+\frac32)}\)

so the semi-integral of the constant function \(f(t)=c\) is given by

\(D^{-{\textstyle\frac12}}c=c\;\lim_{\lambda\rightarrow0}\frac{t^{\lambda+\frac12}\Gamma(\lambda+1)}{\Gamma(\lambda+\frac32)}=2c\sqrt{\frac t{\mathrm\pi}}\)


BETA function

In mathematics, the beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients. It is defined by the integral

\[B(z_1,z_2)=\int_0^1t^{z_1-1}{(1-t)}^{z_2-1}dt\]

for complex number inputs \(z_1,z_2\) such that \(\mathfrak R(z_1),\;\mathfrak R(z_2)\;>\;0\).

A key property of the beta function is its close relationship to the gamma function:

\(B(z_1,z_2)=\frac{\Gamma(z_1)\Gamma(z_2)}{\Gamma(z_1+z_2)}\)

The beta function is also closely related to binomial coefficients. When \(m\) (or \(n\), by symmetry) is a positive integer,

\(B(m,n)=\frac{(m-1)!(n-1)!}{(m+n-1)!}\)
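A quick numerical check of the relationship between the beta and gamma functions (Python standard library):

import math

def beta(z1, z2):
    # B(z1, z2) = Gamma(z1) * Gamma(z2) / Gamma(z1 + z2)
    return math.gamma(z1) * math.gamma(z2) / math.gamma(z1 + z2)

print(beta(3, 4))                                                    # 1/60
print(math.factorial(2) * math.factorial(3) / math.factorial(6))    # 1/60, the (m-1)!(n-1)!/(m+n-1)! form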


Fractional Derivatives via the Fourier Transform

\(ℱ\{\frac{d^a}{dx^a}f(x)\}={(j\omega)}^aF(\omega),\;where\;F(\omega)=ℱ\{f(x)\}\)

\(ℱ^{-1}\{ℱ\{\frac{d^a}{dx^a}f(x)\}\}=ℱ^{-1}\{{(j\omega)}^aF(\omega)\}\)

\(\frac{d^a}{dx^a}f(x)=ℱ^{-1}\{{(j\omega)}^aF(\omega)\}\)



Gamma distribution


Orthogonal vs Orthonormal

centripetal acceleration

\[\frac{v^2}{r} = \omega^2 r = v \omega\]

Orbital Velocity

Imagine you are going in a circle of radius \(r\) starting at three o'clock and heading towards two o'clock. If it takes you time \(T\) then your speed is \(v=2πr/T\).

Now if you look at your velocity, it starts out pointing upwards, then ends up pointing left, then down, then up again. It's like your velocity vector is on a circle of "radius" \(v\), but starting at 12 o'clock (corresponding to the velocity pointing straight up), and it also makes a full circle in time \(T\), because the velocity isn't pointing straight up again until the position is back at 3 o'clock.

So we can compute the acceleration the same way we computed speed \(a=2πv/T\).

So we have two equations and they have a \(T\) we didn't want in our final answer, so solve the first equation, \(v=2\pi r/T\), for \(T\) to get \(T=2\pi r/v\). Then plug that into the second equation \(a=2\pi v/T\) to get \(a=2\pi v/(2\pi r/v)=v^2/r\).

\(\frac{\left|\triangle\overrightarrow v\right|}v=\frac{\left|\triangle\overrightarrow r\right|}r\)

\(\left|\triangle\overrightarrow v\right|=\frac vr\left|\triangle\overrightarrow r\right|\)

\[a_c=\lim_{\triangle t\rightarrow0}\frac{\left|\triangle\overrightarrow v\right|}{\triangle t}=\frac vr\lim_{\triangle t\rightarrow0}\frac{\left|\triangle\overrightarrow r\right|}{\triangle t}=\omega\lim_{\triangle t\rightarrow0}\frac{\left|\triangle\overrightarrow r\right|}{\triangle t}=v\omega=\frac{v^2}r\]

\(F_c=ma\)

\(a=\frac{v^2}r\)

\(F_c=\frac{m_2v^2}r\)

\(F_G=G\frac{M_1\cdot m_2}{r^2}\)

\(F_c=F_G\)

\(\frac{m_2v^2}r=G\frac{M_1\cdot m_2}{r^2}\)

\(v^2=\frac{G\cdot M_1}r\)

\(v=\sqrt{\frac{G\cdot M_1}r}\)

Orbital Period

\(v=\frac dt=\frac{2\mathrm{πr}}T\)

\(v^2=\frac{G\cdot M_1}r\)

\(\frac{4\mathrm\pi^2\mathrm r^2}{T^2}=\frac{G\cdot M_1}r\)

\(T^2=\frac{4\mathrm\pi^2\mathrm r^3}{G\cdot M_1}\)

\(T=\sqrt{\frac{4\mathrm\pi^2\mathrm r^3}{\mathrm G\cdot{\mathrm M}_1}}=2\mathrm{πr}\sqrt{\frac r{\mathrm G\cdot{\mathrm M}_1}}\)
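Plugging in standard values for \(G\), the Earth's mass, and the sidereal day reproduces the roughly 42,164 km geosynchronous radius mentioned earlier; a small sketch in Python:

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg
T = 86164              # sidereal day (23 h 56 min 4 s), in seconds

# T^2 = 4*pi^2*r^3 / (G*M)  =>  r = (G*M*T^2 / (4*pi^2))^(1/3)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(r / 1000)        # ~42,164 km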

Derivative of ln(x)

Why is the derivative of \(\ln(x)\) equal to \(1/x\)?

Sol 1:

\(\frac d{dx}(x)=1\)

\(e^{\ln x}=x\)

\(\frac d{dx}(e^{\ln x})=1\)


Chain Rule

\(\frac d{dx}e^u=e^u\frac{du}{dx}\)


\(e^{\ln x}\frac d{dx}(\ln x)=1\)

\(x\frac d{dx}(\ln x)=1\)

\(\frac d{dx}(\ln x)=\frac1x\)


Sol 2:

\(e=\lim_{n\rightarrow\infty}{(1+\frac1n)}^n\)

\(\frac{\operatorname df}{\operatorname dx}=\lim_{h\rightarrow0}\frac{f(x+h)-f(x)}h\)

\(\frac d{dx}\ln(x)=\lim_{h\rightarrow0}\frac{\ln(x+h)-\ln(x)}h\)

\(=\lim_{h\rightarrow0}\frac{\ln({\frac{x+h}x})}h\)

\(=\lim_{h\rightarrow0}\frac1h\ln(1+\frac hx)\)

\(=\lim_{h\rightarrow0}\ln{(1+\frac hx)}^\frac1h\)

\(let\;m=\frac hx\Rightarrow h=mx,\;(h\rightarrow0,\;m\rightarrow0)\)

\(\frac d{dx}\ln(x)=\lim_{m\rightarrow0}\ln{(1+m)}^{\frac1m\cdot\frac1x}\)

\(=\frac1x\lim_{m\rightarrow0}\ln{(1+m)}^\frac1m\)

\(let\;n=\frac1m,\;(m\rightarrow0,\;n\rightarrow\infty)\)

\(\frac d{dx}\ln(x)=\frac1x\lim_{n\rightarrow\infty}\ln\lbrack{(1+\frac1n)}^n\rbrack=\frac1x\ln\lbrack\lim_{n\rightarrow\infty}{(1+\frac1n)}^n\rbrack\)

\(=\frac1x\ln(e)\)

\(=\frac1x\)
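Both derivations can be confirmed symbolically; a minimal check with SymPy (assuming it is installed):

import sympy as sp

x, h = sp.symbols('x h', positive=True)

# Direct differentiation (Sol 1)
print(sp.diff(sp.ln(x), x))                             # 1/x

# Limit definition (Sol 2)
print(sp.limit((sp.ln(x + h) - sp.ln(x)) / h, h, 0))    # 1/x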