ioioioio.eu is a development and project site. it contains material about content delivery, streaming, cluster technology and cloud behaviour, built with open source tools. the main idea is to bring the content as static or passive items to the gui and to make use of dynamic procedures only rarely. it is simply a playground and sandbox for items, issues and stuff that is not related to one special topic but needs a place to rule the world with a show of force. some of the stuff is mickey-mouse-like and was built during the development of something bigger, which should be online or in production somewhere else.
static content preferred
the static content is delivered by a distributed cluster, which can be seen as a cloud. the files are located on different sites, in different data centres. if someone requests a file, the file is built on demand. if it is smaller than a particular size, it is cached by the system for a while. delivering the least amount of bytes to the user is a consequence of serving the most requests with the smallest machine. during the development here, the content of a site was reduced from, let's say, 500 kilobytes down to some 150 to 300 bytes, depending on the number of objects on a page. the browser's request is examined on the server side, and with some techniques the requested material is probably already on the customer's side. if so, the response is sent back to the customer immediately, telling the browser only the details of that file.
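the check described above, answering with just the details of a file the customer already holds, can be sketched in plain java; the site's actual code is not public, so the class and method names here are illustrative (a full servlet would read the if-none-match header from the request):

```java
// hedged sketch of the conditional request handling described above;
// class and method names are illustrative, not the site's actual code.
public class ConditionalGet {

    // compare the browser's If-None-Match header with the file's current etag:
    // if they match, the material is already on the customer's side and only
    // the status line and headers go back over the wire.
    public static int statusFor(String ifNoneMatch, String currentEtag) {
        if (ifNoneMatch != null && ifNoneMatch.equals(currentEtag)) {
            return 304; // not modified: zero body bytes delivered
        }
        return 200;     // full response with the freshly built file
    }

    public static void main(String[] args) {
        System.out.println(statusFor("\"abc123\"", "\"abc123\"")); // browser copy is fresh
        System.out.println(statusFor("\"abc123\"", "\"def456\"")); // file changed, full delivery
    }
}
```

this is the mechanism that lets a 500-kilobyte site shrink to a few hundred response bytes when the customer's cache is warm.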
if someone takes action and dynamically requests something, the response is taken and the result is rendered onto the cloud drive, then delivered as static content. that leads to a minimum of server time and maximum performance on the response time back to the client. uploading, and adding content in general, is performed on demand. if the user saves content, the dependent files for the media are generated after processing. any request from that moment on is redirected to the static content.
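the render-once flow above can be sketched in a few lines, assuming a plain directory stands in for the cloud drive; the names are illustrative, not the site's implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// minimal sketch of the render-once flow described above, assuming a plain
// directory stands in for the cloud drive; names are illustrative.
public class RenderOnce {

    // first dynamic request: render the result and persist it as a static file.
    // every later request for the same name is simply redirected to that file.
    public static Path renderIfMissing(Path staticDir, String name, String html) {
        try {
            Path target = staticDir.resolve(name);
            if (Files.notExists(target)) {
                Files.writeString(target, html); // render on demand, once
            }
            return target;                       // serve this statically from now on
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("staticstore");
        Path page = renderIfMissing(dir, "page.html", "<p>rendered</p>");
        System.out.println(Files.readString(page));
    }
}
```

the design point is that the expensive dynamic part runs at most once per item; everything afterwards is a plain file read.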
data and model
working with a simple data model became necessary during the evolution of communication. different ideas were born in the software world, and even more in the java galaxy, to maintain only one model for front end, back end and transmission. datanucleus was one idea to shrink the compiled byte code down to a minimum, but it requires annotations, i.e. some meta data above methods or members. the same happens when the java persistence api (jpa) or hibernate is used: the model is only usable for the non-visible part, the back end. in the front end, the same or structurally similar data is also used in some cases. maintaining two models is painful when the resources to build them are not even provided, as in my case. that is why i looked for a way to work with just one model. with the google web toolkit (gwt), everything is written in java. it may be true that i trigger the garbage collector (gc) more with that technique, but i bet some cents on the difference in productivity during the development and maintenance cycles.
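the one-model idea can be illustrated with a plain serializable pojo: gwt compiles the very same class to java-script for the front end while the jvm back end uses it unchanged. the class and field names below are made up for illustration, not the site's real model:

```java
import java.io.Serializable;

// illustrative shared model class: the same source is compiled by gwt to
// java-script for the front end and used as-is on the jvm back end, so
// only one model has to be maintained. names here are made up.
public class ArticleDto implements Serializable {

    private long id;
    private String title;

    // gwt-rpc needs a no-arg constructor to deserialize the object
    public ArticleDto() { }

    public ArticleDto(long id, String title) {
        this.id = id;
        this.title = title;
    }

    public long getId() { return id; }
    public String getTitle() { return title; }
}
```

no annotations, no second hand-written copy for the client: the transmission format is handled by gwt-rpc, so the same getters serve both tiers.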
spying on each single bit
also relevant is the part that logs and observes the count and volume of actions and events triggered by user interaction. measuring the invisible parts is done in the very background of the system. the near-real-time presentation of statistics brings exciting moments to anyone who watches numbers rise or fall. big data is the keyword for that behaviour, as collecting data is necessary to optimize the user experience over the lifetime of the system. securing the data, starting with the handling of passwords, is another topic here. "sha-3 keccak" is already working as a gwt-based module, and on the server side the same way. that gives the latest techniques, such as asymmetric encryption, a chance to be implemented later on.
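on the server side, the jdk ships sha-3 directly since java 9, so the gwt keccak module can be mirrored without an extra library. note that original keccak and standardized sha-3 differ in their padding, so both sides must agree on the same variant; the helper below uses the jdk's sha3-256 and is a sketch, not the site's own module:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// sketch of the server-side counterpart to the gwt hashing module; it uses
// the jdk's built-in sha3-256 (java 9+), not the site's own keccak code.
public class Sha3Util {

    public static String sha3Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA3-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("sha3-256 requires java 9 or newer", e);
        }
    }

    public static void main(String[] args) {
        // the same input must give the same digest on client and server
        System.out.println(sha3Hex("secret"));
    }
}
```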
this page is also here to test the composition of adobe's shockwave flash with pure java-script, libraries for current ajax frameworks, and the design of placeholders and tokens used to create such pages for each customer who wants to register here, sooner or later. having parts loaded directly when the page is requested is one thing; reacting on demand with pre-loading and forecasting methods is another. at the moment, the content is hard-wired inside the html file. in future versions, the loading of files will be triggered as late as possible: when the customer comes close to a page section that was hidden or not visible because it was out of focus, and scrolling or linking to that part happens during the page consumption itself.
developing app-like mechanics inside a "usual browser" with html5 and the latest tools is another aspect. satisfying any kind of browser and any kind of mobile device is kept in view during development. the differing behaviour of those devices raises the effort massively, so the general, minimum needed amount of data that fits the devices on the different platforms is a core feature of the whole thing. having a rich text editor for authoring material like this would be nice as well, but the tools i have found refused to work quickly, so a silent minute is needed there.
with a stopwatch one could measure and benchmark this site against any other, and what i have seen is that my old equipment with single-core processors, a couple of years old, can perform better than current machines with far more power. the reason is that the main functions are built with the stopwatch in hand, lowering the needed time to an acceptable range. delivering the page with its content should be done within one second; then we can talk about features and the design of the content. either way, the result is a happy user, served with lightning-fast page creation. keeping the dom tree in the back of the mind, the complexity of a grown-up app will sometimes hurt to the point of headaches, but the macro picture must rely on those core features. slow-loading pages are not acceptable on this site, so feel free to leave a comment somehow.
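the stopwatch approach boils down to timing a code path against the one-second budget; a minimal sketch with System.nanoTime() (the real measuring code is not shown on this site):

```java
// minimal sketch of the stopwatch idea: time a code path and compare it
// against the one-second delivery budget mentioned above.
public class PageStopwatch {

    public static long measureMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000L;
    }

    public static void main(String[] args) {
        long ms = measureMillis(() -> {
            StringBuilder page = new StringBuilder();
            for (int i = 0; i < 100_000; i++) {
                page.append(i); // stand-in for building the page content
            }
        });
        System.out.println("page build took " + ms + " ms, budget is 1000 ms");
    }
}
```

wrapping each main function this way is what lets an old single-core box stay inside the budget while heavier machines without such measuring fall behind.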
picture carousel, the 11th
the animated thing below is called piecemaker. it is some java-script, an xml file holding the play-list files, the general settings and the types of transitions, handed to a flash application that realizes the visual changes. the current version is 2; i just took the freshest one, as the version before suffered from my page design in the css file with that layout.
i know, flash is not the solution :/
all images, videos and flash movies inside the piecemaker should be formatted to the same size that fits the flash resolution given with 'params'. in this example, the main resolution of 600x360 gives the buttons a chance to appear, while the image size setting of 600x300 inside gives the menu some space. that presentation of pictures is another way to bring some life to this task. sliding right or left, denoising: the effects are mostly the same, but we will soon feed that show with more interesting material.
the service of the company inc. corp
- conceptual guidance through all stages
- cross-checking alternatives against stable tools
- realisation of implementations in hardware and software
- maintenance of core features
- usage of legacy tools
- experience in trouble-shooting
content delivery & streaming
- streaming connection is hidden from the customer
- customer connects to a phalanx of secondary nodes
- connection via a high-speed gateway
- content delivered from different housing sites
- redundant storage of files
- latest technologies used for client and server
- domain specific language, all in java
- knowledge of php5, java-script and css3 as well
- professional planning tools, based on uml2
- diversified testing of core components
- continuous integration during test<->build<->deploy
below are some picked tools from a bunch of helpers i have made for several reasons. sometimes because the existing ones were too big or too expensive, sometimes i built one for fun, to take my brain for a run, and sometimes in advance, to build a module-based architecture for upcoming needs. sometimes i need to play with new libs or want to be prepared for work, so i code on the weekend or after hours.
the key feature of the 20061208.httpLoader.v.0.43.rar is to measure the bandwidth from the connecting client to the server. besides that, the average number of delivered pages per second is visible on the left-hand side of the user interface. on that screenshot, run against ioioioio.eu, it delivered just 2 pages/sec. with a file size of ~45 kb, that means roughly 360 kilobit per page; times two, that makes a raw throughput of about 720 kbit/s under i/o. 32 pages were loaded up to the point of that screenshot; that number rises until all 160 pages to load have been fetched.
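the bandwidth arithmetic behind that screenshot, written out (2 pages/sec at ~45 kilobyte each):

```java
// the throughput arithmetic from the httpLoader screenshot: pages per
// second times page size in kilobyte times 8 gives kilobit per second.
public class Throughput {

    public static double kbitPerSec(double pagesPerSec, double pageKiloByte) {
        return pagesPerSec * pageKiloByte * 8.0;
    }

    public static void main(String[] args) {
        // 2 pages/sec at ~45 kb each
        System.out.println(kbitPerSec(2, 45) + " kbit/s raw throughput"); // 720.0 kbit/s
    }
}
```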
the 20130113.fileIndexer.rar will take its place on the mass storage of this system. indexing the files after their upload to a server is done to take the fingerprint of a file and to index it with soft meta values. a side effect is that the bandwidth of the drive can be seen below. a standard server disc with 5400 rpm will deliver something like 80 mibyte/s at best; with fragmented and nearly full space, the value decreases dramatically. the file below was checked at 12 mibyte/s. the performance is proportionally related to the file size: smaller files are slower to index than larger ones. therefore it is planned to capture the incoming files quickly and leave the whole indexing to some batch process, which could be triggered on demand or hourly, for example.
short description of output:
used: 134.855.824 actual free: 399.131.504 from: 533.987.328 out of: 1.431.699.456
134 mb used by the default vm setting, 399 mb free, 512 mb reserved, ~1.3 gib maximum heap space
read file: d:\temp\filesToIndex\file3.mp3
index the given file by name from folderView
hashing of: file3.mp3 took: 516.919.834 ns. with: 12.819.812 byte/sec.
'file3.mp3' needed 516 milliseconds with 12.8 mibyte/s throughput
sys.gc took: 16 milli
system garbage collection took 16 ms.
rt.gc took: 12 milli
runtime garbage collection took 12 ms.
FileInfoWrapper [fileSize=128198120, fileCreatedDate=0, fileChangedDate=1315569295312, fileName=file3.mp3, fileHash=-85764419]"
the file-related data is: ~128 mbyte size, changed lately, the file name was visible, the hash of the file content is -85764419.
that file should produce the same hash value in each run, or something is wrong and we are not amused.
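the indexing step behind that log can be sketched like this; the real fileIndexer is not public, so a plain content hash stands in for its fingerprint, and the timing mirrors the ns / byte-per-sec figures printed above:

```java
import java.util.Arrays;

// sketch of the indexing step shown in the log above: fingerprint the file
// content and derive the byte/sec throughput from the elapsed nanoseconds.
// Arrays.hashCode stands in for the real fingerprint function.
public class FileIndexSketch {

    // same content must give the same hash in every run, as the text demands
    public static int contentHash(byte[] content) {
        return Arrays.hashCode(content);
    }

    public static long bytesPerSecond(long fileSize, long elapsedNanos) {
        return fileSize * 1_000_000_000L / elapsedNanos;
    }

    public static void main(String[] args) {
        byte[] fakeFile = new byte[1_000_000];
        long start = System.nanoTime();
        int hash = contentHash(fakeFile);
        long nanos = Math.max(System.nanoTime() - start, 1); // guard against a zero reading
        System.out.println("hashing took: " + nanos + " ns. with: "
                + bytesPerSecond(fakeFile.length, nanos) + " byte/sec. hash=" + hash);
    }
}
```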
further jobs will arrive soon; some are too hot to describe here ... the picture for consulting is definitely worth exchanging. it came from a calculation for noise protection: shielding a wall with just two sides to skew on has to be planned, and that is the raw drawing of it, made during the talk with a construction engineer. drawing diagrams helps those most who learn from pictures. text-based learners may be confused by the image without any description. audio learners need someone to explain it all to them, or otherwise learn with foreign-language tapes. the combination of those different techniques achieves the most. seen rationally, this fits the scenario: we are human and therefore produce errors and make faults, and more or less, service will conduct and connect those weak and strong parts, bind them together and shift the difference away. some services, like escorting, are a delicate topic that not everyone wants to join in a discussion. bringing contact remotely with web cams is also a business case that makes a lot.
you will find some reasons there to prove that concept, besides the topics of:
best ingredients from
that glue is partly implemented here:
- nginx webserver 0.7.x
- google web toolkit 2.5.1
- apache tomcat 7
- apache hadoop 1.1
- apache hbase 0.9
- apache commons 3
- apache comet 0.x
- apache upload 1.x
besides those java libs, some java-script frameworks are worth naming:
- jquery 1.10.1
- piecemaker 2
- lightbox 2
some examples of audio renderings out of weekend sessions with the project title:
it is a cross-mix of joint cooperations between me and others, and of others among one another as well.
many took part in the widest sense, so fairly said, that is not just the work of a single person.
the production makes use of a pc-based daw and a bunch of plugins on the vst and midi interface.
neither the mixdown nor the story of each snippet there is finished, so be prepared for boring, or
at least interesting, stuff on that subpage.
app & portlet
two sandbox apps as portlets, under the project title:
they can create downloadable links from the clipboard and load data into a table. the first
app is a helper for when you are surfing the web and a link pops up, but there is no possibility
to right-click and save the file. just copy and paste the url into it and a correct link will
appear. the second thing was an attempt to make the java performance test applet results nicer,
with a sorted table, a resizing mechanism and offloaded content for delivery on demand: a use
case to save costs and performance on the server side, with passive xml-xhttp-requests.
imagine a list of genres out of the world of music. one genre contains several artists who participated in it somehow. a listing of three artists with their biography, discography (gig list) or something like that is delivered through static xml files residing inside the project folder. whenever a file is requested a second time, the request call takes the url from the given link and checks the local store inside the java-script for its existence. if it exists, the result is immediately pushed to the render process and no further server request adds to the latency. that cache lookup is triggered each time an xml file is needed, which saves a lot of bandwidth on the server. if the customer takes advantage of the local browser cache, the xml file will stay there for a period of time, and if he or she comes back within that time, the result is rendered without downloading the file again. after that, the js-caching takes over the next requests, which results in a massive reduction of traffic.
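the cache lookup described above, sketched in plain java (on the site itself it lives in java-script / gwt on the client); the loader function stands in for the server round trip:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// sketch of the client-side xml cache described above: the first request
// for a url hits the loader (the server round trip), every repeat is
// answered from the local store without any network latency.
public class XmlCache {

    private final Map<String, String> store = new HashMap<>();
    private final Function<String, String> loader;
    private int serverHits = 0;

    public XmlCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String fetch(String url) {
        return store.computeIfAbsent(url, u -> {
            serverHits++;               // only cache misses reach the server
            return loader.apply(u);
        });
    }

    public int serverHits() {
        return serverHits;
    }

    public static void main(String[] args) {
        XmlCache cache = new XmlCache(url -> "<genre src='" + url + "'/>");
        cache.fetch("/genres/rock.xml");   // server round trip
        cache.fetch("/genres/rock.xml");   // local store, microseconds
        System.out.println("server hits: " + cache.serverHits());
    }
}
```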
the time for a round trip depends on your internet connection. dsl, as an example, gives the user a latency of around 20-30 ms in one direction. at least the same time again for the signal's way back down the line makes 40-60 ms as the usual delay to the world. the time spent on the server is mostly counted in two digits of milliseconds. cached items can be delivered within microseconds inside the customer context; reading from the local disc blows the time up (it is a vhost) to some hundred milliseconds. requesting the file from the cluster drive can take mean times of half a second, depending on the size. we are talking about items with no more than 1 mb of content, in bytes.
(c) 1999 Jascha Buder
in his third working life, he likes to work mostly with java, after seeing its capability to work out things for multiple tiers in a chain or process of functionality within systems. 20 years of working and learning led him to build a content delivery system as a home project. many cores and many customers are satisfied with these tools and solutions.
(c) 1999 Jascha Buder
got plenty of skillz, from js and php over mysql to css and html. building typo3 sites on top of the installation of a virtual machine, e.g. a vhost, is common to him. advancing the system with his own plugins and scripts is the daily business. 15 years of experience is what u can guess.
(c) 1999 Jascha Buder
there is not much known about him; it could be you! we are looking for people with the capacity for php/js/java/jee on content management / delivery sites. topics like big data, ajax, remote procedures or objects are familiar to you. then join our team soon and
welcome page: ioioioio.eu
postal code: d-10369