biology-and-math
curiosamathematica:

The Rado graph
The Rado graph is the unique (up to isomorphism) countable graph R such that for every finite graph G and every vertex v of G, every embedding of G−v as an induced subgraph of R can be extended to an embedding of G into R. This implies R contains all finite and countable graphs as induced subgraphs.
Rado gave the following construction: identify the vertices of the graph with the natural numbers. For every x and y with x < y, an edge connects vertices x and y in the graph if the xth bit of the binary representation of y is nonzero.
Thus, for instance, the neighbors of vertex 0 consist of all odd-numbered vertices, while the neighbors of vertex 1 consist of vertex 0 (the only vertex whose bit in the binary representation of 1 is nonzero) and all vertices with numbers congruent to 2 or 3 modulo 4.
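Rado's bit rule is easy to play with in code. Here is a minimal sketch in Python; the helper names (rado_adjacent, witness) are my own, and the witness construction is the standard argument for the extension property, not something from the post:

```python
def rado_adjacent(x, y):
    """Edge between distinct vertices: take the smaller as a bit index
    into the binary representation of the larger (Rado's construction)."""
    x, y = sorted((x, y))
    return (y >> x) & 1 == 1

# Neighbors of 0 are exactly the odd numbers (bit 0 set) ...
neighbors_of_0 = [n for n in range(1, 20) if rado_adjacent(0, n)]
# ... and neighbors of 1 are 0, plus every number with bit 1 set,
# i.e. numbers congruent to 2 or 3 mod 4.
neighbors_of_1 = [n for n in range(20) if n != 1 and rado_adjacent(1, n)]

def witness(U, V):
    """A vertex adjacent to everything in U and nothing in V (U, V disjoint
    finite sets of vertices): set exactly the bits indexed by U, plus one
    higher bit so the witness is larger than every vertex in U and V."""
    k = max(U | V, default=-1) + 1
    return sum(1 << u for u in U) + (1 << k)

w = witness({0, 2}, {1, 3})   # 0b10101 = 21: adjacent to 0 and 2 only
print(neighbors_of_0, neighbors_of_1, w)
```

The witness function is what makes the extension property work: given any finite induced subgraph already embedded in R, it produces a fresh vertex with exactly the prescribed adjacencies.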


nobel-mathematician

It’s called the principle of maximum confusion. Mathematicians like to name things such that they provide the most confusion to non-mathematicians.

If anyone asks you to explain something you don’t want to explain, just cite the principle of maximum confusion and its lemma, “Now, shut up.”

David C. Kelly, complex analysis professor (via mathprofessorquotes)
nobel-mathematician
So, I think the hardest thing I learned in mathematics was perseverance and patience, and this is the nature of mathematics. Math is very binary. It’s usually nothing, nothing, nothing, nothing, nothing… and then everything, you’ve got it. It’s also very humbling because, once you’ve got it, you realize it looks so obvious. So, you’ve got some humiliating experience like, ‘Oh, why didn’t I get that in the first place?’ But that’s what I learned, that’s the nature of mathematics.
biology-and-math

spring-of-mathematics:

Infinity …    
      … it’s not big …
      … it’s not huge …
      … it’s not tremendously large …
      … it’s not extremely humongously enormous …
      … it’s

       …Endless!

Infinity has no end. Infinity is the idea of something that has no end.

"Paul Erdős lived in Budapest, Hungary, with his Mama. Mama loved Paul to infinity ∞. When Paul was 3, she had to go back to work as a math teacher…." (Extract from the book The Boy Who Loved Math: The Improbable Life of Paul Erdős by Deborah Heiligman - Figure 1).

Infinity, most often denoted by the symbol ∞, is an abstract concept describing something without any limit: an unbounded quantity greater than every real number. It is relevant in a number of fields, predominantly mathematics and physics. In number systems incorporating infinitesimals, the reciprocal of an infinitesimal is an infinite number, i.e., a number greater than any real number. Infinity is a very tricky concept to work with, as evidenced by some of the counterintuitive results that follow from Georg Cantor’s treatment of infinite sets.
Georg Cantor formalized many ideas related to infinity and infinite sets during the late 19th and early 20th centuries. In the theory he developed, there are infinite sets of different sizes (called cardinalities). For example, the set of integers is countably infinite, while the set of real numbers is uncountable (one classic proof is Cantor’s diagonal argument).
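Countability can be made very concrete: Cantor’s diagonal-walk pairing gives an explicit bijection between ℕ × ℕ and ℕ, which is why the pairs of naturals (and hence the rationals) are countable. A small sketch in Python; the function names are my own:

```python
def cantor_pair(x, y):
    # Walk the diagonals x + y = 0, 1, 2, ... in order:
    # each pair of naturals gets a unique natural number.
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    # Invert: recover the diagonal index w, then the position y on it.
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# Every pair round-trips, so the map is a bijection N x N -> N.
ok = all(cantor_unpair(cantor_pair(x, y)) == (x, y)
         for x in range(50) for y in range(50))
print(ok)
```

No such trick can exist for the reals: that is exactly what the diagonal argument rules out.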

  • In geometry and topology: Infinite-dimensional spaces are widely used in geometry and topology, particularly as classifying spaces, notably Eilenberg–MacLane spaces. Common examples are the infinite-dimensional complex projective space K(Z, 2) and the infinite-dimensional real projective space K(Z/2Z, 1).
  • In fractal geometry: The structure of a fractal object is reiterated in its magnifications. Fractals can be magnified indefinitely without losing their structure and becoming “smooth”; they have infinite perimeters, and some enclose an infinite area while others enclose a finite one. One such fractal curve with an infinite perimeter and finite enclosed area is the Koch snowflake.
  • In real analysis: The symbol ∞, called “infinity”, is used to denote an unbounded limit: x → ∞ means that x grows without bound, and x → −∞ means that x decreases without bound.
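The Koch snowflake claim above is easy to verify numerically: each iteration multiplies the number of sides by 4 and divides their length by 3, so the perimeter grows without bound while the enclosed area converges, to 8/5 of the starting triangle. A quick sketch using exact fractions (my own throwaway code):

```python
from fractions import Fraction

sides = 3                      # start from an equilateral triangle, side 1
side_len = Fraction(1)
area = Fraction(1)             # in units of the starting triangle's area

for n in range(1, 8):
    # Each existing side sprouts one new triangle scaled by 1/3,
    # i.e. with (1/9)^n the area of the original triangle.
    area += sides * Fraction(1, 9) ** n
    sides *= 4
    side_len /= 3

perimeter = sides * side_len   # = 3 * (4/3)^7, and it keeps growing
print(float(perimeter), float(area))
```

After only seven iterations the perimeter has passed 22 while the area is already within 1% of its limit 8/5.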

See more at: Infinity on Wikipedia and MathWorld - What is Infinity? on Math is Fun.

Reference: Paul Erdős and the Erdős Number Project page.

Image: Koch snowflakes & The Boy Who Loved Math: The Improbable Life of Paul Erdős.

spatialtopiary

twocubes:

twocubes:

bonus points if you can come up with some visualizations~

Actually, you know what? I’m going to explain this, because it’s a good example of a situation where symbolic reasoning is way more powerful than visual reasoning. So here goes:

The basic principle at work is that any sequence (a_n), n ≥ 1, can be written as the telescoping sum of the differences between its consecutive terms: a_n = ∑_{k=1}^{n} (a_k − a_{k−1}), supposing by convention that a_0 = 0.

In this particular case, we choose for a_n the sequence defined on the left of each of these. Noting that ∑_{k=1}^{n} k = n(n+1)/2, we then have, for each exponent m, that (∑_{k=1}^{n} k)^m = ∑_{k=1}^{n} [(k(k+1)/2)^m − ((k−1)k/2)^m].

The terms in the sum on the right, then, are just (k(k+1)/2)^m − ((k−1)k/2)^m, with m going from 2 to 7, but written in a more appealing way.

Thus, we have, in about three lines, a general formula for all of these, and the basic principle involved generalizes further, and can give you similar formulas for many many other sequences. Which is nice. Contrast this with either of the visualizations of the first of these, which are kind of hard to think up, and kind of hard to generalize from too. (Although, they are pretty, and I’m proud of having come up with one of them.)
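Since the identity is exact integer arithmetic, it’s trivial to sanity-check by machine; a quick throwaway verification (names my own):

```python
def triangular(n):
    # sum_{k=1}^{n} k = n(n+1)/2
    return n * (n + 1) // 2

# The m-th power of the n-th triangular number telescopes into differences
# of consecutive m-th powers: T(n)^m = sum_{k=1}^{n} (T(k)^m - T(k-1)^m).
for m in range(2, 8):
    for n in range(1, 30):
        lhs = triangular(n) ** m
        rhs = sum(triangular(k) ** m - triangular(k - 1) ** m
                  for k in range(1, n + 1))
        assert lhs == rhs
print("identity holds for m = 2..7, n = 1..29")
```

The check passes for every m and n tried, as the telescoping argument guarantees it must.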

—————

So what’s the point? This is actually a response to a number of reblogs of the visualization posts saying that visualizing things makes math easier. The point here is that, in this case, it’s much easier to do things symbolically than visually, and that finding a visualization is actually a much harder exercise here. Now, to be clear, this emphatically isn’t always the case; there are many places in mathematics where visual approaches are very often preferred, and that’s important too. But I would argue that, generally, it is desirable in mathematics to be able to jump back and forth from a visual approach to a symbolic one.

So, yeah, the visualizations are pretty, and thinking about them is fun, but, I just don’t want the visual fun we’re having to be at the expense of symbolic thinking. Symbolic and visual thinking work best together, and I would hope they would be friends to you too.

The fact that for the next month or so I don’t have any lessons at all every other Monday is the biggest motivator for my procrastination right now. 

And by procrastinate of course I mean hell loads of maths practice and maths team preparation and things. 

Anyone else on their school’s senior maths team in the manchester-ish area? If so I might see you there :) 

If we’re lucky then we might make it to the nationals so does anyone think they’ll make it through and see us there? ;)


spatialtopiary

ferr0uswheel:

ryanandmath:

Imagine you wanted to measure the coastline of Great Britain. You might remember from calculus that straight lines can make a pretty good approximation of curves, so you decide that you’re going to estimate the length of the coast using straight lines of length 100 km (not a very good estimate, but it’s a start). You finish, and you come up with a total coastal length of 2800 km. And you’re pretty happy. Now, you have a friend who also for some reason wants to measure the length of the coast of Great Britain. And she goes out and measures, but this time using straight lines of length 50 km, and comes up with a total coastal length of 3400 km. Hold up! How can she have gotten such a dramatically different number?

It turns out that due to the fractal-like nature of the coast of Great Britain, the smaller the measurement that is used, the larger the coastline length becomes. Empirically, if we kept making the measurements smaller and smaller, the coastal length would increase without limit. This is a problem! And this problem is known as the coastline paradox.

By how fractals are defined, straight lines actually do not provide as much information about them as they do with other “nicer” curves. What is interesting, though, is that while the length of the curve may be impossible to measure, the area it encloses does converge to some value, as demonstrated by the Sierpinski curve, pictured above. For this reason, while it is difficult to talk about how long the coastline of a country may be, it is still possible to get a good estimate of the total land mass that the country occupies. This phenomenon was studied in detail by Benoit Mandelbrot in his paper “How Long Is the Coast of Britain?” and motivated many of the connections between nature and fractals in his later work.
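The ruler-length effect is easy to reproduce on a synthetic coastline. The sketch below (all names my own) builds a Koch curve as a stand-in for a coast, then “walks” it with dividers of decreasing opening; the measured length grows as the ruler shrinks:

```python
import math

def refine(pts):
    """Replace each segment of the polyline with the 4-segment Koch generator."""
    out = [pts[0]]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        s = math.sin(math.pi / 3)
        # apex: the middle third rotated 60 degrees outward
        apex = (x0 + 1.5 * dx - s * dy, y0 + 1.5 * dy + s * dx)
        out += [(x0 + dx, y0 + dy), apex, (x0 + 2 * dx, y0 + 2 * dy), (x1, y1)]
    return out

pts = [(0.0, 0.0), (1.0, 0.0)]
for _ in range(6):
    pts = refine(pts)           # 4^6 = 4096 tiny segments

def divider_length(pts, ruler):
    """Walk the curve, stepping to the next vertex at least `ruler` away,
    and report (number of steps) * (ruler length)."""
    total, anchor = 0.0, pts[0]
    for p in pts[1:]:
        if math.dist(anchor, p) >= ruler:
            total += ruler
            anchor = p
    return total

for ruler in (0.3, 0.1, 0.03):
    print(ruler, divider_length(pts, ruler))
```

Each smaller ruler reports a strictly longer coastline, which is exactly the paradox: on a fractal curve the answer depends on the measuring stick, not just the curve.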

that’s an interesting paradox. fractals solve every problem.