I created this simple shader that attempts to draw geometry contours using ray-tracing. The idea is to send extra rays, parallel to the camera ray but translated along the normal of the intersected surface. If a parallel ray hits the same geometry, the pixel is not on a contour.
It somehow seems to work, but it still needs a bit of refinement.
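A minimal, self-contained sketch of this contour test, using a single analytic sphere instead of Sol-R's actual scene types, and an arbitrary offset value (both are assumptions for illustration):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance to the nearest hit of a unit-direction ray with a sphere, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def is_contour(origin, direction, center, radius, offset=0.1):
    """Contour test as described above: cast a second ray, parallel to the
    first, with its origin offset along the surface normal at the hit."""
    t = intersect_sphere(origin, direction, center, radius)
    if t is None:
        return False  # no primary hit, nothing to outline
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    shifted = [o + offset * n for o, n in zip(origin, normal)]
    # Same direction, translated origin: if the parallel ray misses the
    # geometry, we are on a contour.
    return intersect_sphere(shifted, direction, center, radius) is None
```

A ray through the middle of the sphere is not flagged, while a grazing ray near the silhouette is, which is exactly the edge-detection behaviour described above.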
Phong and Blinn shading. The normal is defined by a vector going from the voxel position to the barycenter of the surrounding voxels, weighted by opacity. I got the idea in the middle of the night, thanks to a strong winter storm that prevented me from getting a good rest, but it seems to work OK :-)
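A sketch of that normal estimate, assuming the opacities live in a dict keyed by integer voxel coordinates (the data layout and the neighbourhood radius are illustrative choices, not Sol-R's actual ones):

```python
def voxel_normal(opacity, x, y, z, r=1):
    """Vector from the voxel to the opacity-weighted barycenter of its
    (2r+1)^3 neighbourhood, normalized. `opacity` maps (i, j, k) to an
    opacity value; the result may need flipping depending on whether the
    renderer wants normals pointing into or out of dense matter."""
    bx = by = bz = weight = 0.0
    for i in range(x - r, x + r + 1):
        for j in range(y - r, y + r + 1):
            for k in range(z - r, z + r + 1):
                a = opacity.get((i, j, k), 0.0)
                bx += a * i
                by += a * j
                bz += a * k
                weight += a
    if weight == 0.0:
        return (0.0, 0.0, 0.0)  # empty neighbourhood, no defined normal
    n = (bx / weight - x, by / weight - y, bz / weight - z)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n) if length > 0.0 else (0.0, 0.0, 0.0)
```

Note that a perfectly symmetric neighbourhood (deep inside the volume) yields a zero vector, which is consistent: only voxels near an opacity gradient, i.e. near a surface, get a meaningful normal.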
Let's face reality, 3D is dead, long live nD. Now that the Sol-R ray-tracer is more or less complete (for what it was initially designed for, anyway), it's time to move on to the next level. I added the Hypercube scene to the Sol-R viewer so that we can now play around with n-dimensional hypercubes, thanks to the Delphi code by cs_Forman (http://codes-sources.commentcamarche.net/source/33735-hypercubes). Now that it's pretty clear that our world has more than 3 dimensions, I am hoping that Sol-R can help us understand what n-dimensional geometry looks like when it intersects our 3D world. The code is there:
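For readers who want to experiment without the viewer, here is a toy sketch, unrelated to the cs_Forman Delphi code above, that enumerates the 2^n vertices of an n-cube and perspective-projects them down to 3D, one dimension at a time (the projection distance is an arbitrary choice):

```python
from itertools import product

def hypercube_vertices(n):
    """The 2^n vertices of an n-dimensional unit hypercube, centered at the origin."""
    return [tuple(c - 0.5 for c in v) for v in product((0, 1), repeat=n)]

def project_to_3d(vertex, distance=2.0):
    """Repeatedly apply a simple perspective projection to drop dimensions
    above 3, one at a time. This is one of many possible projections."""
    v = list(vertex)
    while len(v) > 3:
        w = v.pop()  # drop the highest remaining dimension
        scale = distance / (distance - w)  # valid while |w| < distance
        v = [c * scale for c in v]
    return tuple(v)
```

For n = 4 this gives the familiar tesseract picture: an inner cube (vertices with w = -0.5, scaled down) nested inside an outer cube (w = +0.5, scaled up).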
In this silent video from the Blue Brain Project at SC16, 865 segments from a rodent brain are simulated with isosurfaces generated from Allen Brain Atlas image stacks. The work is derived from the INCITE program’s project entitled: Biophysical Principles of Functional Synaptic Plasticity in the Neocortex.
I produced two sequences of that video using Brayns, the application I designed in the context of the Blue Brain Project.
Last Tuesday, I presented to our colleagues from the neuro-robotics team how Brayns can be used to render high-quality images. Brayns is hardware agnostic, and it takes no more than one command line argument to switch between the OSPRay (CPU) and OptiX (GPU) backends. The following video shows Brayns in action on a 24-megapixel display wall! Brayns is running on a single machine powered by 2 NVIDIA Quadro K5000 GPUs.
The input file is a binary array of floats: x, y, z, radius and value for each element. Each voxel of the final volume contains the sum of all elements, with a weight that corresponds to the value of the element divided by its squared distance to the voxel. Note that values in the final volume are normalized.
This is currently a brute-force implementation that produces accurate 8-bit volumes.
The <output_file> is suffixed with the size of the volume.
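The weighting rule can be sketched in scalar Python as follows (the real implementation vectorizes this with ISPC). How the radius enters the weight is an assumption here: it is used to clamp the squared distance, which also avoids a division by zero at the element's own position.

```python
def voxelize(elements, dims, voxel_size=1.0):
    """Brute-force sketch of the voxelization rule described above.

    elements: list of (x, y, z, radius, value) tuples, matching the
    input-file layout. dims: (nx, ny, nz) size of the output volume.
    """
    nx, ny, nz = dims
    volume = [0.0] * (nx * ny * nz)
    for iz in range(nz):
        for iy in range(ny):
            for ix in range(nx):
                cx = (ix + 0.5) * voxel_size
                cy = (iy + 0.5) * voxel_size
                cz = (iz + 0.5) * voxel_size
                total = 0.0
                for x, y, z, radius, value in elements:
                    d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                    # Weight = value / squared distance, clamped (assumption)
                    total += value / max(d2, radius * radius)
                volume[ix + nx * (iy + ny * iz)] = total
    peak = max(volume)
    # Normalize to [0, 1]; an 8-bit volume would then store int(v * 255).
    return [v / peak for v in volume] if peak > 0.0 else volume
```

The triple loop over voxels makes the brute-force cost obvious: O(voxels × elements), which is exactly what the SIMD vectorization is there to speed up.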
SIMDVoxelizer makes use of the Intel ISPC compiler and requires ispc to be in the PATH.
To build SIMDVoxelizer, simply run make in the source folder.
I am currently working on adding volume rendering to the existing Brayns implementation. Volumes are clearly the way to go to represent the activity that takes place outside of the geometry. Ray-tracing, on the other hand, concentrates on high-quality surface rendering, with shadows and other global illumination effects.
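The core of volume rendering is compositing samples along each ray. The following is a generic front-to-back compositing sketch, not Brayns code; the transfer function and step size are assumed inputs:

```python
import math

def march_ray(samples, step=1.0):
    """Minimal front-to-back compositing along a single ray.

    samples: list of (emitted_intensity, opacity_per_unit_length) pairs
    produced by a transfer function at successive sample points.
    Returns the accumulated color and the remaining transmittance.
    """
    color, transmittance = 0.0, 1.0
    for intensity, sigma in samples:
        alpha = 1.0 - math.exp(-sigma * step)  # Beer-Lambert absorption
        color += transmittance * alpha * intensity
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # early ray termination
            break
    return color, transmittance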
Many thanks to my colleagues Raphael and Grigori, without whom this development would have taken a few more days!
Currently in my volume branch, but soon to be merged into master!!
Many problems in science and engineering require the ability to grow tubular or polymeric structures up to large volume fractions within a bounded region of three-dimensional space. Examples range from the construction of fibrous materials and biological cells such as neurons, to the creation of initial configurations for molecular simulations. A common feature of these problems is the need for the growing structures to wind throughout space without intersecting. At any time, the growth of a morphology depends on the current state of all the others, as well as the environment it is growing in, which makes the problem computationally intensive. Neuron synthesis has the additional constraint that the morphologies should reliably resemble biological cells, which possess nonlocal structural correlations, exhibit high packing fractions, and whose growth responds to anatomical boundaries in the synthesis volume. We present a spatial framework for simultaneous growth of an arbitrary number of nonintersecting morphologies that presents the growing structures with information on anisotropic and inhomogeneous properties of the space. The framework is computationally efficient because intersection detection is linear in the mass of growing elements up to high volume fractions and versatile because it provides functionality for environmental growth cues to be accessed by the growing morphologies. We demonstrate the framework by growing morphologies of various complexity.
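The linear scaling of intersection detection mentioned in the abstract is typically achieved with a uniform spatial hash: with a cell size on the order of the interaction radius, each new element only needs to be checked against the few cells around it. The sketch below illustrates that general technique; it is not the paper's actual implementation, and all names are illustrative:

```python
from collections import defaultdict
from math import floor

class SpatialHash:
    """Uniform-grid spatial hash. With cell_size >= the query radius,
    each insertion and query touches O(1) cells, so checking N growing
    elements costs O(N) overall -- the linear scaling referred to above."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.grid = defaultdict(list)

    def _key(self, p):
        return tuple(floor(c / self.cell) for c in p)

    def insert(self, p, payload):
        self.grid[self._key(p)].append((p, payload))

    def neighbours(self, p, radius):
        """Payloads within `radius` of p, scanning only the 27 adjacent
        cells (requires radius <= cell_size)."""
        kx, ky, kz = self._key(p)
        r2 = radius * radius
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for q, payload in self.grid.get((kx + dx, ky + dy, kz + dz), []):
                        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
                        if d2 <= r2:
                            out.append(payload)
        return out
```

A growing segment then only tests for collisions against `neighbours(tip, radius)` instead of against every element placed so far.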