I had been thinking about my Touch Table project for a long time. My research into existing solutions was a bit disappointing: mostly insanely expensive, large, or platform-locked, they did not fit my vision of an Android- or Linux-powered ‘desktop’ that would fit into my existing workflow, rather than hoping that applications would support it (like the Microsoft Surface).
July 01, 2015 02:55 AM
The madness is over. FUDCon Pune 2015 took place from 26 to 28 June 2015, and we successfully hosted a large number of people at MIT College of Engineering. This was not without its challenges, though, and we met yesterday to understand what went well for us (i.e. the FUDCon volunteer team) and what could have been better. This post, however, is not just a summary of that discussion, since it is heavily coloured by my own impression of how we planned and executed the event.
Our bid was pretty easy to put together because we had a pretty strong organizer group at the outset and we more or less knew exactly what we wanted to do. We wanted to do a developer-focussed conference that users could attend and hopefully become contributors to the Fedora project. The definition of developer is a bit liberal here, meaning any contributor who can pitch in to the Fedora project in any capacity. The only competing bid was from Phnom Penh, and it wasn’t serious competition by any stretch of the imagination since its only opposition to our bid was “India has had many FUDCons before”. That, combined with some serious problems with their bid (primarily cash-management related), meant that Pune was the obvious choice. We had trouble getting an official verdict on the bid due to Christmas vacations in the West, but we finally had a positive verdict in January.
The call for proposals went out almost immediately after the bid verdict was announced. We gave people about a month to submit their proposals, and once we did that, a lot of us set out pinging individuals and organizations within the Open Source community. This worked: we got 142 proposals, many more than we had imagined.
We had set out with the idea of doing just 3 parallel tracks because some of us were of the opinion that more tracks would simply reduce what an individual could take away from the conference. This also meant that we had at most 40 slots with workshops taking up 2 slots instead of 1.
The website took up most of my time and in hindsight, it was time that I could have put elsewhere. We struggled with Drupal as none of us knew how to wrangle it. I took the brave (foolhardy?) task of upgrading the Drupal instance and migrating all of the content, only to find out that the schedule view was terrible and incredibly non-intuitive. I don’t blame Drupal or COD for it though; I am pretty sure I missed something obvious. SaniSoft came to the rescue though and we were able to host our schedule at shdlr.com.
After the amazing response in the CfP, we were tempted to increase the number of tracks since a lot of submissions looked very promising. However, we held on tight and went about making a short list. After a lot of discussions, we finally gave in to the idea of making a separate workshop track and after even more discussions, we separated out a Container track, a Distributed Storage track and an OpenStack track. So all of a sudden, we now had 5 tracks in a day instead of 3!
Sankarshan continually reminded me to reach out to the speakers to make sure that their talks fit in with our goals. I could not do that, mainly because we did not have the bandwidth, but also because, I realize in hindsight, our goal wasn’t refined beyond the fact that we wanted a more technical event. The result was that we made a couple of poor choices, the most notable being the opening keynote of the conference. The talk about Delivering Fedora for Everyone was an excellent submission, but all of us misunderstood its content. The talk was a lot more focussed than we had thought it would be, and it ended up being the wrong beginning for the conference since it seemed to scare away a lot of students.
The content profile overall however was pretty strong and most individual talks had almost full rooms. The auditorium looked empty for a lot of talks, but that was because each row of the massive auditorium could house 26 people, so even a hundred people in the auditorium filled in only the first few rows. The kernel talks had full houses and the Container, OpenStack and Storage tracks were packed. It was heartening to see some talks where many in the audience followed the speaker out to discuss the topic further with them.
One clear failure on the content front was the Barcamp idea. We did a poor job of planning it and an even poorer job of executing it.
Travel, Accommodation and Commute
We did a great job on travel and accommodation planning and execution. Travel subsidy arrangements were well planned and announced, and we had regular meetings to decide on them. Accommodation was negotiated and booked well in advance, and we had few issues on that front, except for an occasionally overloaded network at the hotel. We had excellent support for visa applications, as well as for making sure that speakers were picked up from and dropped off at the airport on time. The venue was far from the hotel, so we had buses to ferry everyone across. Although that was tiring, it was done with perfect precision and we had no unpleasant surprises in the end.
Materials, Goodies and SWAG
We had over 2 months from the close of the CfP to conference day, and we wasted a lot of that time when we should have been ordering and readying swag. This was probably the biggest mistake we made in planning, and it bit us quite hard in the closing weeks. We had a vendor bail on us near the end, leading to a scramble in Raviwar Peth to try and get people to make us stuff in just over a week. We were lucky to find such vendors, but we ended up making some compromises on quality. Not on t-shirts though, since those came from an old, reliable vendor we had forgotten about during the original quote collection. He worked night and day and delivered the t-shirts and socks despite the heavy Mumbai rains.
The design team was amazing with their quick responses to our requests and made sure we had the artwork we needed. They worked with some unreasonable deadlines and demands and came out on top on all of them. The best part was getting the opportunity to host all of them together on the final day of the conference and doing a Design track where they did sessions on Inkscape, Blender and GIMP.
We struggled with some basic things with the print vendor like sizes and colours, but we were able to fix most of those problems in time.
We settled on MIT College of Engineering as the venue after considering 2 other colleges. We did not want to do the event at COEP again since they hosted the event in 2011. They had done really well, but we wanted to give another college the opportunity to host the event. I had been to MIT weeks earlier as a speaker at their technical event called Teknothon and found their students to be pretty involved in Open Source and technology in general, so it seemed natural to suggest them as potential hosts. MITCOE were very positive and were willing to become hosts. With a large auditorium and acceptably good facilities, we finalized MITCOE as our venue of choice.
One of the major issues with the venue, though, was the layout of the session rooms. We had an auditorium, classrooms on the second floor of another building, and classrooms on the fourth floor of that same building. The biggest trouble was getting from the auditorium to that other building and back. The passages were confusing, and a lot of people struggled to get from one section to the other. We had put up signs, but they clearly weren’t good enough, and some people just gave up and sat wherever they were. I don’t know if people left out of frustration; I hope they didn’t.
The facilities were pretty basic, but the volunteers and staff did their best to work around that. WiFi did not work on the first two days, but the internet connection for streaming talks from the main tracks worked and there were a number of people following the conference remotely.
HasGeek pitched in with videography for the main tracks and they were amazing throughout the 3 days. There were some issues on the first day in the auditorium, but they were fixed and the remainder of the conference went pretty smoothly. We also had a couple of laptops to record (but not stream) talks in other tracks. We haven’t reviewed their quality yet, so the jury is still out on how useful they were.
Volunteers and Outreach
While our CfP outreach was active and got good results, our outreach in general left a lot to be desired. Our efforts to engage student volunteers and the college were more or less non-existent until the last days before the conference. We spoke to our volunteers for the first time only a couple of days before the conference and, as expected, many of the volunteers did not even know what to expect from us or from the conference. This meant that there was barely any connection between us.
Likewise, our media efforts were very weak. Our presence on social media was not worth talking about, and we only reached out to other colleges and organizations in the last weeks before the conference. Again, we did not invest any effort in engaging organizations to try and form a community around us. We did have a Twitter outreach campaign in the final weeks, but the content of the tweets ended up annoying more people than it won over. We failed to engage speakers to talk about their content or share teasers to build interest in their sessions.
Best. FUDPub. Ever.
After looking at some conventional venues (i.e. typical dinner and drinks places) for dinner and FUDPub, we finally settled for the idea of having the social event at a bowling arcade. Our hosts were Blu’O at the Phoenix Market City mall. The venue had everything from bowling to pool tables, from karaoke rooms to a dance floor. It had everything for everyone and everyone seemed to enjoy it immensely. I know I did, despite my arm almost falling off the next day :)
We had approval for up to $15,000 from the Fedora budget, and we got support from a couple of other Red Hat departments for $5,000 each, giving us a total of $25,000 to work with. The final picture of budget consumption is still a work in progress as we sort out all of the bills and make reimbursements in the coming weeks. I will write another blog post describing that in detail, and also how we managed and monitored the budget over the course of the event.
We did a pretty decent event this time, and it seemed like a lot of attendees enjoyed the content. We could have done a lot better on the venue front, but the efforts of the staff and volunteers were commendable. Would I do this again? Maybe not, but that has more to do with wanting to get back to programming than with the event organization itself. Setting up such a major conference is a lot of work, and things only get better with practice. Occasional organizers like yours truly cannot do justice to a conference of this size if they do it just once every five years. This probably calls for a dedicated team that does such events.
There were also questions about whether such large conferences are relevant anymore. Some stated their preference for micro-conferences focussed on a specific subset of the technology landscape, but others argued that having 10 conferences for 10 different technologies is taxing on budgets, since it is not uncommon for an individual to be interested in more than one technology. In any case, this will shape the future of FUDCon and maybe even Flock, since with such a concentration of focus, Flock could end up becoming a meetup where contributors talk only about governance issues and matters specific to the Fedora project, and not the broader technology spectrum that makes up Fedora products.
In the end though, FUDCon is where I made friends in 2011, and it was the same again in 2015. The conference brought people from different projects together, and I got to know a lot of very interesting people. But most of all, the friends I made within our volunteer team were the biggest takeaway from the event. We did everything together, we fought, and we supported each other when it mattered. There may be things I would have done differently if I did this again, but I would not have asked for a different set of people to work with.
July 01, 2015 01:32 AM
OPEN SOURCE is key for humanity to preserve its history in the digital age, Vatican Library CIO Luciano Ammenti has argued.
“The Vatican Library is a conservation library. We try to preserve our history. We tried to expand the number of reading rooms available for people that want to use our library,” he said.
“But we realised that reading rooms will never be enough. We have 82,000 manuscripts in total, and at any one time only 20 percent of them can be read in the library.
June 30, 2015 10:20 AM
KDE’s touchpad configuration module supports both the libinput touchpad driver and the synaptics driver. Newer distro releases such as Fedora 22 ship with both the libinput and synaptics drivers installed, and the libinput driver is chosen by default for touchpads. Some users wanted to use the synaptics driver and tweak all of the options it exports through the touchpad KDE control module. To do so, simply uninstall the libinput driver (xorg-x11-drv-libinput); the touchpad KCM then uses the synaptics driver, which makes all of the KCM options tweakable. Some of those users reported that after uninstalling the libinput driver but keeping the synaptics driver (xorg-x11-drv-synaptics), the touchpad KCM displayed the error message “No touchpad found” and no options were editable, as reported in this bug.
This wasn’t easily reproducible on my system, though I have seen it once or twice. On a fresh Fedora 22 KDE spin installation, which comes with both the libinput and synaptics drivers, I was able to reproduce the issue by simply uninstalling the libinput driver, which helped in debugging it. The XlibBackend class first checked for the presence of the X atom “libinput Tapping Enabled” to determine whether the libinput driver was active. In that case, the XlibLibinputBackend was instantiated to handle the configuration; otherwise, the code fell back to the synaptics driver and instantiated the XlibSynapticsBackend.
It turns out that the X atom “libinput Tapping Enabled” is active even after the libinput driver is uninstalled! This was verified by checking the list of initialized atoms with a nimble tool, xlsatoms, from the xorg-x11-utils package. With and without the libinput driver installed, the output of this command was something like:
$ xlsatoms | grep -i tap
316 libinput Tapping Enabled
$ dnf remove xorg-x11-drv-libinput
(logout/restart and login again for X to use synaptics driver)
$ xlsatoms | grep -i tap
313 libinput Tapping Enabled
342 synaptics Tap Action
This clearly shows that the libinput atom is active even when the driver is not installed. That caused the KCM code to try to instantiate the XlibLibinputBackend, which is non-existent, and fail with the error message “No touchpad found”. This seems to be a bug in Clutter, Mutter and GTK+, as found in this Fedora bug, ‘touchpad not found’. Those toolkits inadvertently created this atom when the intention was only to check for its existence; I don’t know if the kcm_touchpad code was also creating this atom.
With that finding, the kcm_touchpad code has been revised to first instantiate the XlibLibinputBackend and check for failures. If it fails, we try to instantiate the XlibSynapticsBackend. It is a small fix, yet it solves an issue that affected many users. The fix has been confirmed by some testers and is now pushed to plasma-desktop. Since the code adds a couple of error messages, it is not available in the 5.3.2 release but will be available in 5.4.0.
Tagged: hacking, kde
June 27, 2015 12:40 PM
This coming June 30 is special: the day will be 24 hours and one second long. The extra second is called a leap second. You may not notice it on the wristwatches or wall clocks we ordinarily use; and anyway, what is a single second worth to us? But this extra second cannot be dismissed so easily. Because it is very likely to cause problems in computers and devices that need second-level accuracy, technologists around the world are preparing for the moment June 30, 23:59:60 — the instant that is not yet July 1.
Where does this extra second come from? In short, the adjustment is needed because the speed of the Earth’s rotation is not the same at all times. Movements of the Earth’s plates, that is, earthquakes, are a major cause of the slowing of the rotation. Our time is based on the Earth’s rotation: a day divided into 24 hours, an hour into 60 minutes, and each minute into 60 seconds. This can be called astronomical time. But the precise definition of the second is not based on these divisions. The scientific and official definition of the second is 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of a caesium-133 atom.
All the clocks in the world keep time according to the Coordinated Universal Time (UTC) standard. Time in the various time zones, as well as timekeeping in computers, is reckoned against it. UTC is the timekeeping system, based on Greenwich Mean Time, accepted by the scientific world. India’s time zone is written as UTC+5:30, meaning five and a half hours ahead of Greenwich time. Since 1972, UTC has followed International Atomic Time, which is based on the radiation of the caesium atom.
Let me clarify that the extra second on June 30 that I mentioned at the beginning is in UTC. In India, it will actually be 5:30 in the morning of July 1.
The everyday notion of time is based on the day-night cycle. Since UTC too is meant for everyday use, these seconds are inserted from time to time so that it keeps the precision of atomic time while staying in step with the Earth’s rotation. The adjustment on June 30, 2015 will be the 26th of its kind. The last leap second came on June 30, 2012.
To be precise, after 23:59:59 on June 30, instead of rolling over to July 1, 00:00:00, the time will read June 30, 23:59:60. Only after that does July begin.
The leap second causes trouble in several ways. In computers, the linear sequencing of all kinds of operations is based on timestamps. It is the operating system that generates these ticks and serves the applications running on top of it. Needless to say, counts of ticks are used to compute minutes, hours and days. Starting from the confusion of whether 23:59:60 belongs to June 30 or July 1, there is no telling what kinds of problems these can cause. The Linux kernel had a mechanism to handle this, but during the 2012 leap second it did not work properly. The New York Stock Exchange has already announced that it will halt operations for about an hour on June 30.
Big websites have already prepared to face the leap second. Wikipedia will temporarily suspend the synchronisation of its servers with UTC and run the servers on their hardware clocks; once the leap second has passed, the servers will be re-synchronised with UTC in stages. Google uses a different approach: it slightly stretches the seconds around the leap second, so that the few extra milliseconds spread across many seconds together add up to one second, while avoiding the arrival of a new second altogether.
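As a toy illustration of the smearing arithmetic (the numbers below are made up for illustration; Google’s actual smear parameters differ), stretching every second in a 20-hour window by the same tiny amount absorbs the whole leap second:

```haskell
-- Toy numbers, for illustration only: smear one leap second
-- evenly over a 20-hour window.
main :: IO ()
main = do
    let windowSecs    = 20 * 60 * 60 :: Double  -- 72,000 seconds
        extraPerSecMs = 1000 / windowSecs       -- ms added to each second
    print extraPerSecMs                         -- each second grows slightly
    -- all the small stretches together add up to the one leap second
    print (extraPerSecMs * windowSecs / 1000)
```

Each second grows by only about 0.014 milliseconds, which is far too small for applications to notice, yet no clock ever reads 23:59:60.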
Discussions have also begun on somehow getting rid of this headache. Even if we account for leap seconds here on Earth, our space observations still need the astronomical clock. Moreover, when a leap second is needed can only be decided roughly six months in advance. It is the International Earth Rotation and Reference Systems Service (IERS) that decides when a leap second is required.
Further reading: https://en.wikipedia.org/wiki/Leap_second
June 27, 2015 06:08 AM
1. John Wick
Finally, a good Keanu Reeves film. This guy’s name may be Wick, but he is really tough. Full-on fighting and a nicely flowing story. Besides, we like films where the Russians get beaten up :)
2. Jurassic World
Jurassic World is really just the fourth instalment of those Jurassic Park films. As an extra, there are genetically modified dinosaurs. There are some modern techniques, and the final fight is good. But I didn’t enjoy it as much as Jurassic Park and The Lost World. Kavin enjoyed it, because it was his first dinosaur movie (in a theatre).
Both are worth watching: the first without the family, the second with the family.
June 23, 2015 02:40 PM
It’s less than 72 hours to go before the much-awaited FUDCon APAC 2015 kicks off. International delegates are boarding flights as we speak, while others are packing their bags and preparing for take-off. The organising team at ground zero is running at full throttle and leaving no stone unturned to ensure smooth sailing. :)
I’m packing my bags and gearing up for my talk, “Introduction to DNSSEC – F22 feature”. Do drop in and join the conversation.
See you there…!!! :)
June 23, 2015 07:56 AM
[Published in Open Source For You (OSFY) magazine, September 2014 edition.]
In the third article in the series, we will focus on more Haskell functions, conditional constructs and their usage.
A function in Haskell has the function name followed by its arguments. An infix operator function has operands on either side of it. A simple infix add operation is shown below:
*Main> 3 + 5
8
If you wish to convert an infix function to a prefix function, it must be enclosed within parentheses:
*Main> (+) 3 5
8
Similarly, if you wish to convert a prefix function into an infix function, you must enclose the function name within backquotes (`). The elem function takes an element and a list, and returns True if the element is a member of the list:
*Main> 3 `elem` [1, 2, 3]
True
*Main> 4 `elem` [1, 2, 3]
False
Functions can also be partially applied in Haskell. A function that subtracts a given number from ten can be defined as:
diffTen :: Integer -> Integer
diffTen = (10 -)
Loading the file in GHCi and passing three as an argument yields:
*Main> diffTen 3
7
Haskell exhibits polymorphism. A type variable in a function is said to be polymorphic if it can take any type. Consider the last function, which returns the last element of a list. Its type signature is:
*Main> :t last
last :: [a] -> a
The ‘a’ in the above snippet refers to a type variable and can represent any type. Thus, the last function can operate on a list of integers or characters (string):
*Main> last [1, 2, 3, 4, 5]
5
*Main> last "Hello, World"
'd'
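Writing your own polymorphic function works the same way. The sketch below (swapPair is a name chosen here for illustration, not a standard function) uses two type variables, so it works on a pair of any two types:

```haskell
-- swapPair exchanges the components of a pair; the type variables
-- 'a' and 'b' place no constraints on the element types
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)

main :: IO ()
main = do
    print (swapPair (1 :: Int, "one"))  -- a pair of two different types
    print (swapPair ('d', True))
```

Asking GHCi for :t swapPair reports the fully general signature swapPair :: (a, b) -> (b, a), just as :t last did above.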
You can use a where clause for local definitions inside a function, as shown in the following example, to compute the area of a circle:
areaOfCircle :: Float -> Float
areaOfCircle radius = pi * radius * radius
  where pi = 3.1415
Loading it in GHCi and computing the area for radius 1 gives:
*Main> areaOfCircle 1
3.1415
You can also use the let expression with the in statement to compute the area of a circle:
areaOfCircle :: Float -> Float
areaOfCircle radius = let pi = 3.1415 in pi * radius * radius
Executing the above with input radius 1 gives:
*Main> areaOfCircle 1
3.1415
Indentation is very important in Haskell as it helps code readability; the compiler will emit errors otherwise. You must use whitespace instead of tabs when aligning code. If the let and in constructs in a function span multiple lines, they must be aligned vertically as shown below:
compute :: Integer -> Integer -> Integer
compute x y = let a = x + 1
                  b = y + 2
              in a * b
Loading the example with GHCi, you get the following output:
*Main> compute 1 2
8
Similarly, the if and else constructs must be neatly aligned. The else branch is mandatory in Haskell. For example:
sign :: Integer -> String
sign x = if x > 0
         then "Positive"
         else if x < 0
              then "Negative"
              else "Zero"
Running the example with GHCi, you get:
*Main> sign 0
"Zero"
*Main> sign 1
"Positive"
*Main> sign (-1)
"Negative"
The case construct can be used for pattern matching against possible expression values. It needs to be combined with the of keyword. The different values need to be aligned, and the resulting action must be specified after the ‘->’ symbol for every case. For example:
sign :: Integer -> String
sign x = case compare x 0 of
           LT -> "Negative"
           GT -> "Positive"
           EQ -> "Zero"
The compare function compares two arguments and returns LT if the first argument is less than the second, GT if the first argument is greater than the second, and EQ if both are equal. Executing the above example, you get:
*Main> sign 2
"Positive"
*Main> sign 0
"Zero"
*Main> sign (-2)
"Negative"
The sign function can also be expressed using guards (‘|’) for readability. The action for a matching case must be specified after the ‘=’ sign. You can use a default guard with the otherwise keyword:
sign :: Integer -> String
sign x | x > 0 = "Positive"
       | x < 0 = "Negative"
       | otherwise = "Zero"
The guards have to be neatly aligned:
*Main> sign 0
"Zero"
*Main> sign 3
"Positive"
*Main> sign (-3)
"Negative"
There are three very important higher order functions in Haskell — map, filter, and fold.
The map function takes a function and a list, and applies the function to each and every element of the list. Its type signature is:
*Main> :t map
map :: (a -> b) -> [a] -> [b]
The first function argument accepts an element of type ‘a’ and returns an element of type ‘b’. Adding two to every element of a list, for example, can be implemented using map:
*Main> map (+ 2) [1, 2, 3, 4, 5]
[3,4,5,6,7]
The filter function accepts a predicate function for evaluation, and a list, and returns the list with those elements that satisfy the predicate. For example:
*Main> filter (> 0) [-2, -1, 0, 1, 2]
[1,2]
Its type signature is:
filter :: (a -> Bool) -> [a] -> [a]
The predicate function for filter takes as its first argument an element of type ‘a’ and returns True or False.
The fold function performs a cumulative operation on a list. It takes as arguments a function, an accumulator (starting with an initial value) and a list, and it cumulatively aggregates the computation of the function over the accumulator value and each member of the list. There are two types of folds — left and right fold.
*Main> foldl (+) 0 [1, 2, 3, 4, 5]
15
*Main> foldr (+) 0 [1, 2, 3, 4, 5]
15
Their type signatures are, respectively:
*Main> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a
*Main> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
The two types of fold are evaluated differently, as demonstrated below:
*Main> foldl (+) 0 [1, 2, 3]
6
*Main> foldl (+) 1 [2, 3]
6
*Main> foldl (+) 3 [3]
6
It can be represented as ‘f (f (f a b1) b2) b3’, where ‘f’ is the function, ‘a’ is the accumulator value, and ‘b1’, ‘b2’ and ‘b3’ are the elements of the list. The parentheses accumulate on the left for a left fold. The computation looks like:
*Main> (+) ((+) ((+) 0 1) 2) 3
6
*Main> (+) 0 1
1
*Main> (+) ((+) 0 1) 2
3
*Main> (+) ((+) ((+) 0 1) 2) 3
6
With the recursion, the whole expression is constructed first and evaluated only when it is finally formed. A left fold can thus cause a stack overflow, or never complete when working with infinite lists. The foldr evaluation looks like this:
*Main> foldr (+) 0 [1, 2, 3]
6
*Main> foldr (+) 0 [1, 2] + 3
6
*Main> foldr (+) 0 [1] + 2 + 3
6
It can be represented as ‘f b1 (f b2 (f b3 a))’ where ‘f’ is the function, ‘a’ is the accumulator value, and ‘b1’, ‘b2’ and ‘b3’ are the elements of the list. The computation looks like:
*Main> (+) 1 ((+) 2 ((+) 3 0))
6
*Main> (+) 3 0
3
*Main> (+) 2 ((+) 3 0)
5
*Main> (+) 1 ((+) 2 ((+) 3 0))
6
In some cases, such as condition checking, ‘f b1’ can be computed without requiring the subsequent arguments, and hence the foldr function can work with infinite lists. There is also a strict version of foldl (foldl’) that forces the computation before proceeding with the recursion.
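Both behaviours can be checked directly; the sketch below assumes GHC’s standard Data.List module, from which foldl’ is imported:

```haskell
import Data.List (foldl')

main :: IO ()
main = do
    -- foldl' forces each intermediate sum, so no large chain of
    -- unevaluated thunks builds up
    print (foldl' (+) 0 [1 .. 1000000 :: Integer])
    -- foldr can stop early on an infinite list: (||) never looks at
    -- its second argument once the first one is True
    print (foldr (\x acc -> even x || acc) False [1 ..])
```

The first fold prints 500000500000; the second terminates with True as soon as it reaches the element 2, even though the list never ends.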
If you want a reference to a matched pattern, you can use the as pattern syntax. The tail function accepts an input list and returns everything except the head of the list. You can write a tailString function that accepts a string as input and returns the string with the first character removed:
tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs
The entire matched pattern is represented by input in the above code snippet.
Functions can be chained to create other functions. This is called ‘composing’ functions. The mathematical definition is as follows:
(f o g)(x) = f(g(x))
In Haskell, composition is done with the dot (.) operator, which has the highest precedence among operators and is right-associative. If you want to force an evaluation, you can use the function application operator ($), which has the lowest precedence and is also right-associative. For example:
*Main> reverse ((++) "yrruC " (unwords ["skoorB", "lleksaH"]))
"Haskell Brooks Curry"
You can rewrite the above using the function application operator that is right-associative:
Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB", "lleksaH"]
"Haskell Brooks Curry"
You can also use the dot notation to make it even more readable, but the final argument needs to be evaluated first; hence, you need to use the function application operator for it:
*Main> reverse . (++) "yrruC " . unwords $ ["skoorB", "lleksaH"]
"Haskell Brooks Curry"
June 22, 2015 09:30 PM
Hi all. This will hopefully be a short read on how Indian communities, especially product-based communities, are opaque in their functioning. I have been re-reading a book called ‘Microtrends’ that I bought a few years ago. The book starts with a bow to another best-seller sold several years ago, called Megatrends. I haven’t read the latter though […]
June 22, 2015 08:31 AM
India is home to a billion-dollar IT industry, numerous e-Governance projects, the world’s largest biometric database, and many tech-driven services. The single major problem with all these technological projects at the national and state levels is the danger of theft and fraud. The Government of India (GoI) did realize this and, as they do with all services, introduced a policy called the National Cyber Security Policy 2013.
Well, the story ends with the formation of the policy. Two years after the policy was drafted, there is no sign of the National Cyber Coordination Centre (NCCC) or the National Critical Information Infrastructure Protection Centre (NCIIPC). Both these agencies were supposed to take care of national IT infrastructure, mainly that falling under the GoI.
What’s The Problem?
Currently, as per my understanding, there is only one national-level cyber alert team, the Indian Computer Emergency Response Team (CERT-In). They are mainly responsible for capturing and spreading information related to cyber security threats, and they have been doing an excellent job. The major problem with CERT-In is that they are dependent on the CERT teams of more advanced countries. What they need is a better way to tackle cyber security threats that may put public and private IT infrastructure at risk. For this reason, the GoI came up with the new cyber security policy.
Under the new policy, the NCCC and NCIIPC will be formed as separate agencies, meaning they won’t be attached to CERT-In. When it comes to forming separate national agencies in India, it takes really long to get hold of things, and similar issues seem to have happened with these two new agencies. And the more time it takes to put these agencies to work, the riskier our national IT infrastructure becomes. With cyber surveillance at its peak, national documents being leaked all over the world, and millions of Indians coming online, it has become a basic need to have these two agencies in place to tackle any cyber threat situation. The next war won’t be fought between armed forces, but between cyber war teams.
What’s The Solution?
The best case would be to have these two agencies under, or alongside, CERT-In. That way CERT-In itself would get a major infrastructure upgrade, and its years of experience would also come in handy. Setting up new agencies to do similar tasks, with new teams and new technical skills, is a long and tedious process.
The GoI still has time to get this done the other way, provided there is no conflict of interest. Also, with Digital India and many other technological projects like Aadhaar taking shape, the GoI should implement the policy as soon as possible, before the country gets tangled in cyber warfare.June 22, 2015 12:06 AM
* Welcome to Venice. Though we haven't stepped out of the house in three or four days (except for essential shopping), we haven't really had the pleasure of this Venice-like atmosphere. I was glad to learn that Domino's delivers pizza on time even in the rain, but as always the medium pizza proved too much for us (thanks to the bananas, the next day brought relief).
* Kavin also got two days off from school. And today is Sunday, another holiday, so we have decided to go visit those dinosaurs.
* Running and cycling have stopped entirely. Last Wednesday we went to the 'Nagla' trail: a lovely, unfamiliar place, but risky for cycling (without whisky); you need MTB tyres for it. The cycle trip to Matheran was also dropped because of the rain, and so was the 200 km BRM (though people did go, and as I write this they must still be riding the 1200 km one; I don't yet have that much courage, nor does my cycle have that much strength).
* And yes, Happy Father's Day. And Sunday is also shaving day :D
June 21, 2015 04:28 AM
E-commerce was supposed to simplify things, but in reality it is getting more complicated.
While purchasing online is getting easier, making payments is painful.
First, these sites give you a dozen options to make payments. For example, they will say that if you pay using one third-party wallet you get 2% off, another wallet gives you 5% off, and a third wallet gives you 7%!
Now you have to go and register with each of these wallets first. If a wallet were already popular, why would it offer discounts? They are offering discounts to capture customers, which is why you have to register with them first. With leading banks such as ICICI and HDFC jumping into the wallet business, I think this will only get more complicated.
You end up spending time registering with different wallets. After registering, they will still ask you for your credit card credentials.
In case you are already registered, they at least ask you for your login password.
If that's not enough, the credit card company will again ask you for a password to complete the transaction!
I am tired of e-commerce now, so I just choose cash on delivery, but even that is not available every time.June 19, 2015 09:12 AM
A lot has changed since the last blog post (more than three years ago). I was happily running a successful business around Videocache till Google decided to push HTTPS really hard and enforced SSL even for video content. That rendered Videocache completely useless, as YouTube video caching was its unique selling point. Though people are still using it for other websites (whatever is supported and not on HTTPS yet), I personally didn't find it good enough to keep selling. To add to the trouble, Mozilla and friends announced that there would be free certs for everyone. That took away whatever motivation was left to keep working on Videocache. I decided to open source Videocache, and the source is now available on GitHub. If you have better ideas, or you are looking forward to making things work by forging certs etc., fork it and give it a shot.
In 2015, if you are a web developer, you must know how APIs work and you should be able to consume them. So, to learn to expose APIs and version them properly, I fired up a small project, Pixomatix. Being a Rails developer, you get obsessed with Rails and try to implement everything in it. Even when you want an API with 2-3 endpoints, you tend to make the horrible mistake of doing it in Rails. This kept bugging me, and a few weeks later I decided to freshen up my Sinatra memories. But working with Sinatra is not all that easy, especially if you are used to all the niceties of Rails. I dug up my attempt at implementing Videocache in Ruby, and extracted a few tasks and configurations I had automated a long time ago. I ended up working a lot more on it and packaged it into a template app with almost all the essential stuff. Though I need to document it a little more, the app has got everything needed to expose a versioned API via Sinatra.
On the other hand, I tried to use the devise gem for authentication in Pixomatix. It was all good for integration with standard web apps and APIs, but it sort of failed me when I tried to version the API. Devise turned out to be a black hole when I tried to dig deeper to make things work. I tried a few other gems which supported token authentication, but they were also no good for versioning. Generally, you may not need to version the authentication part of your API, but what if you do! Since this was just a learning exercise, I was hell-bent on implementing it. So I just reinvented the wheel and coded basic authentication (including token authentication) for the API.
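The Ruby code itself isn't shown in the post; purely as an illustration of the hand-rolled token authentication described above (the secret, names and token format here are hypothetical, not taken from Pixomatix), a minimal HMAC-signed token scheme might look like this:

```python
import hashlib
import hmac
import time

# Hypothetical server-side secret; in practice load it from configuration.
SECRET = b"server-side-secret"

def issue_token(user_id):
    """Sign "user_id:timestamp" so the server can later verify the token."""
    payload = "%s:%d" % (user_id, int(time.time()))
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (payload, sig)

def verify_token(token):
    """Recompute the signature over the payload and compare in constant time."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

A versioned API can then keep a scheme like this behind a /v1/ route and change the token format in /v2/ without touching the framework's authentication layer.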
That’s it for this post. I am looking forward to posting regularly about the new stuff I learn.June 18, 2015 02:17 PM
One of the selling points of the cloud is that it is supposed to be green, but is Amazon Web Services itself green?
Amazon Web Services has been under fire in recent weeks from a group of activist customers who are calling for the company to be more transparent in its usage of renewable energy.
In response, rather than divulge additional details about the source of power for its massive cloud infrastructure, the company has argued that using the cloud is much more energy efficient than customers powering their own data center operations.
But the whole discussion has raised the question: How green is the cloud?June 18, 2015 11:31 AM
I am happy to announce that my book on Docker, the Docker Cookbook (dockercookbook.github.io), got published last week. I was introduced to the publisher through a tech friend of mine, and I am thankful to him for sharing this opportunity with me. I started working on the book almost a year back.
I was very new to Docker at that time, but I was co-organizing the Docker meetup in Bangalore, India and learning from the Docker community. The book covers concepts; managing Docker and images; network and data management for containers; Docker use cases; orchestration and hosting platforms; and Docker performance and security. I tried to put in whatever I have learned in the last year or so. There are many more topics on the wish list; hopefully I get the chance to cover them in a second edition :).
I would like to thank all the reviewers of the book, Scott Collier, Allan Espinosa, Julien Duponchelle and Vishnu Gopal, for giving their valuable time to review the content, make suggestions and find my mistakes.
It was more work than expected. I spent many weekends and nights on it, and there were a few times when it felt like I wouldn't be able to pull it off, but thankfully I did. I want to thank my family, friends and co-workers at the office who supported me through this project. It was great fun to work on the book; during the process I learnt a lot, and the Docker community was a great help.June 17, 2015 04:42 PM
Though the process of installing Sublime Text 3 on Fedora 21 is not difficult at all, the articles available online often don't have the correct steps. Since I wasted a decent amount of time finding the right script to install Sublime Text 3 on my machine, I thought I would document the steps and save someone else's time.
The steps for installing Sublime Text 3 on Fedora 21 are:
For Linux x64:
wget -O install-sublime.sh https://gist.github.com/xtranophilist/5932634/raw/sublime-text-3-x64.sh && sudo sh install-sublime.sh; rm -rf install-sublime.sh
 For Linux x32:
wget -O install-sublime.sh https://gist.github.com/xtranophilist/5932634/raw/sublime-text-3-x32.sh && sudo sh install-sublime.sh; rm -rf install-sublime.sh
The content of these scripts can be found here: https://gist.github.com/xtranophilist/5932634
Alternatively, if you have already downloaded the script, run it as root:
su -c "sh install-sublime-text.sh"June 17, 2015 09:43 AM
Over the last month, I have been logging the websites I visit and use, mostly those which require a user to log in. To my surprise, I have accounts at over 50 different websites. The number may be much higher, considering I wasn't able to recall all the websites where I created an account just because that was the only way to get in, and never used it afterwards. This may be the case with many internet users.
What’s The Problem?
Well, the problem is that 90% of these 50+ websites don't have SSL, and some of them send a plain-text password reset, or email the password itself, showcasing their inner genius in handling sensitive user data. I have taken care not to repeat the mistake of using dumb passwords, but that doesn't help much, as intruders can get in and hit these websites hard. Many of them don't care much about encryption, mostly because they don't have expertise in it, or maybe because it costs a lot to hire someone who does. There should be a way to protect sensitive user data even on websites that don't spend much effort doing their bit.
Do You Have A Solution?
The first solution I see is to delete the account, but the problem here is that many of the websites I/we log into don't have a "delete/wipe" option. If you stretch a lot, websites may offer account deactivation, which again doesn't help. Ultimately you end up being tied to a particular website you may never use again, and the worst happens when someone hacks it. If you are wondering why anyone would care about websites that most likely don't get many visitors, you are wrong. Such websites are much more vulnerable, as they can be easy targets, and when you extend such an intrusion across many other similar websites, you get a very large pool of user data. So, please give me that delete button.
The second solution is to make use of auth APIs. Google and Facebook are the two most popular and widely used; let them take care of logging in and out of accounts. If a user removes app authentication for logins, remove/wipe the data automatically as well. This way you don't get into the hassle of managing user account creation and maintenance, and maybe you tap into the social sphere by using such auth APIs. This isn't a straightforward solution, but it is doable.
The third solution would be to imbibe encryption by default, both on the client and the server side. I am not sure whether this is the case in today's databases and other back-end tools, but if software embedded encryption by default, then at least 99% of user data would be safe. Getting SSL is costly and not many opt for it, but if open source projects like WordPress can find a way to build websites with encryption embedded everywhere, I think that should help.
The fourth solution is the simplest: don't open an account if you aren't able to establish trust in a particular website. Look for SSL, and if you are an experienced internet user, you will get a hint whether to create an account with the website or not. Also, limit the urge to use every website you get hold of.June 17, 2015 07:15 AM
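For stored credentials, the practical form of "encryption by default" on the server side is a salted, slow password hash rather than the password itself. A minimal sketch (the function names and iteration count are my own choices, not from any product mentioned here):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 digest; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive with the stored salt and compare against the stored digest."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest
```

A site that stores only (salt, digest) pairs can never email anyone their password, which is exactly the point.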
The latest Kilo release of the OpenStack software, made available Thursday, sports new identity (ID) federation capability that, in theory, will let a customer in California use her local OpenStack cloud for everyday work, but if the load spikes, allocate jobs to other OpenStack clouds either locally or far, far away.
“With Kilo, for the first time, you can log in on one dashboard and deploy across multiple clouds from many vendors worldwide,” Mark Collier, COO of the OpenStack Foundation, said in an interview.June 17, 2015 03:11 AM
Fedora Users and Developers Conference (FUDCon) is the annual conference for people interested in Fedora in any way. For the Asia Pacific region, this year the host will be Pune, from 26 to 28 June. Pune hosted FUDCon 4 years ago, in 2011. The team is experienced in organizing such an event and consists of many long-time Fedora contributors.
As always, FUDCon will feature several interesting talks around Fedora and the Fedora ecosystem. We have a lot of topics from the development, engineering, translation, quality engineering and documentation areas. Dennis Gilmore, Harish Pillay and Jiri Eischmann will be giving keynotes. I am really looking forward to these sessions and various other talks. Along with regular talks and workshops, this time FUDCon will have three tracks, one for each day, on distributed storage, OpenStack and containers. Since Neependra will be away for Red Hat Summit, Lalatendu and I are going to conduct the container track. We have some excellent speakers lined up for the track. The idea is to create a story that even a beginner can follow. So we'll start with an introduction talk and then conduct workshops on Docker and Atomic. We'll eventually dig deeper into Kubernetes and OpenShift. Here is the complete outline for the container track:
- Introduction to Docker and Project Atomic
- Docker Basic Workshop
- Fedora Atomic Workshop
- Hands-On Kubernetes
- Orchestration of Docker containers using Openshift v3
We have an entire day dedicated to container technologies. This presents a unique opportunity to learn from people who are actually using containers and contributing to several upstream projects. So join us during FUDCon APAC in Pune on 28 June for the container track.June 16, 2015 09:29 AM
Dell commissioned Greyhound Research to understand PC usage in India.
The ‘PC Users Trends of Emerging India’ survey polled 6,000 citizens from 40 cities, from Tier 1 to Tier 4, across five user groups broadly defined by age and sociological factors like life aspirations and purchasing capacity.
According to a recent study by MAIT and KPMG, India’s PC penetration is estimated at just 9 percent, lower than neighbouring countries like Sri Lanka, which stands at 12 percent, while China is at 50 percent. The traditional desktop PC market is expected to grow at 2 percent, while the market for notebooks is expected to grow at 9 percent, according to a Gartner report published in April.June 16, 2015 06:26 AM
For many years I maintained a curated list of upcoming events on this blog. This was in response to the problem of knowing what events are happening in South East Asia. Later I expanded the list to include events across Asia and Africa. The reason for closing it now is problems with the Lanyrd.com service that powered the events page. Perhaps one day I'll restart my curation of events, when I find a suitable alternative service.June 13, 2015 07:11 AM
Is it related to Android? No! To Java? No! To GNOME then? No, not directly. Did I spin up a company? It could have been possible, but no, it's not that either!
So what am I upto?
It is with great pleasure that I announce that I now work for IBM, India.
I signed the joining letter on the 12th of May, 2015. Since then, I have been busy setting up my work environment on my IBM-provided Lenovo ThinkPad and getting acquainted with the new (not very) city of Bangalore. I am sticking to Ubuntu as my development Linux distribution there so that I don't waste time on a learning curve. I installed Ubuntu with a bunch of software needed for my official work.
I work in the Linux Technology Centre (IBM-LTC), a part of IBM India Systems and Development Labs, and my primary focus will be with the RAS (Reliability, Availability and Serviceability) team on OPAL (OpenPower Abstraction Layer) and related projects.
My first assignment was to build and install the upstream kernel master, then boot-test and configure it using a virtual machine. Sounds exciting, huh? :D
So, I installed my first Ubuntu guest virtual machine on top of my host Ubuntu following these steps, and am getting comfortable with the Virtual Machine Manager. I also had to enable BIOS settings for native KVM acceleration (Virtualization Technology and VT-d) for the VM to run at a usable speed. Then I followed the simple kernel build steps (which I reserve for some other blog post) and, after some updates to the GRUB configuration file, I could explicitly boot into a freshly built upstream kernel, 4.1.0-rc5+.
Though my development machine is an Intel x86 machine, most of my patches are going to be tested on POWER8 machines, e.g. for enabling some functionality on PPC machines or checking software compatibility for the PPC architecture. So if you don't have access to those machines, you might have to take my word for what I report.
I will describe my kernel fiddles and contributions to other open source tools/utilities in the coming posts, so stay tuned! I'll mostly post patches with the email id: chandni[AT_SPAMFREE]linux.vnet.ibm.com
June 12, 2015 03:41 PM
Come, let's meet (clap clap clap)
Shilam shalo (clap clap clap)
A raw thread (clap clap clap)
Let's have a race (clap clap clap)
Come, let's meet, shilam shalo, a raw thread, let's have a race
Plucked ten leaves
One leaf was raw
A baby deer
The deer went into the water
Its grandma caught it
Grandma went to London
From there she brought bangles
The bangle broke (clap)
Grandma sulked (clap)
(faster and faster)
We'll win grandma over
We'll eat ras malai
Ras malai is nice
We ate fish
A bone turned up in the fish
Mummy gave us a slap
The slap hurt hard
We ate samosas
The samosas were really good
Nanaji, namaste!June 12, 2015 12:17 PM
Chinese e-commerce giant Alibaba is upping its investment in cloud computing in the United States, making it more of a competitor to Amazon, Google, and Microsoft than ever before.
Alibaba’s cloud division, Aliyun, has signed a series of new partnerships with the likes of Intel and data center company Equinix to localize its cloud offerings without having to build its own new data centers, CNBC’s Arjun Kharpal reports.
The slew of partnerships, which Alibaba is calling its Marketplace Alliance Program, focuses on expanding its cloud services globally, not just the US. Besides Equinix and Intel, it also signed a deal with Singtel in Singapore.June 10, 2015 09:06 AM
* Today I got the news on Facebook that my school teacher Pinakinbhai Jani is no more. RIP, sir!
* I studied under him for only one year (fifth standard). He taught social studies (samajshastra, or S.Sha. for short), and also ran the Boy Scouts. And he was extremely strict too. His way of teaching was unlike that of ordinary teachers: a real joy. Largely because of his strict nature, he was unpopular among the naughty students and the sycophantic ones.
* Sometimes we understand the worth of a thing or a person only after losing them. By the time I reached the eighth and ninth standards, and later college, I realized that Pinakinbhai had been ahead of his time.
June 10, 2015 05:15 AM
This post is about a few bugs that I was able to get help with and get fixed, some which are in progress, and some which may take a long time to resolve. I have probably mentioned quite a few times that I moved to GNU/Linux because at the time I was […]June 07, 2015 08:05 PM
JAGSOFT SOLUTIONS at Velachery is looking for a full-time or part-time
Interested people can send their details to Jagsoftsolutions@gmail.com
No. 28/1, Nagendra Nagar,
Opp. Phoenix Market City,
Velachery Main Road,
Velachery, Chennai – 600 042.
9884920666 / 9884930666
See more at: http://jagsoftsolutions.com/
June 03, 2015 11:52 PM
I have an AWS instance with Ubuntu installed.
When I add a new user or try to change the password, I get the following error:
passwd: Module is unknown
passwd: password unchanged
Let us check the auth.log for any issues.
root@ip-172-31-9-242:~# tail -n 2 /var/log/auth.log
Jun 3 14:06:01 ip-172-31-9-242 CRON: pam_unix(cron:session): session closed for user root
Jun 3 14:08:01 ip-172-31-9-242 CRON: PAM unable to dlopen(pam_cracklib.so): /lib/security/pam_cracklib.so: cannot open shared object file: No such file or directory
This means pam_cracklib is missing. Let us search for it:
root@ip-172-31-9-242:~# apt-cache search pam | grep crack
libpam-cracklib – PAM module to enable cracklib support
Good. Let us try installing it.
root@ip-172-31-9-242:~# apt-get install libpam-cracklib
Now let us try changing the password again.
Great. It works now.
Lesson: look at the log files before googling.
June 03, 2015 10:03 PM
Today we had the first ever Fedora Globalization (g11n) meeting. Members from 4 groups participated: Internationalization, Localization, the Fedora language testing group and Zanata. For background on this meeting, see the g11n proposal.
Due to limited time, we decided to discuss the FLTG and UTRRS topics in the next meeting. We are going to have our next meeting on 17th June 2015 at the same time, 04:30 UTC.
Zanata feedback survey
We are using Zanata as the official translation platform in Fedora. We need to run a survey to take feedback from both translators and package maintainers. It will help us understand the needs of, and the features missing for, these different user groups. We agreed to run the survey around the end of August 2015. By the next meeting we will have further updates on how we plan to do it.
ACTION: mkim and apeter to follow up with lukebrooker.

L10N sprints based on F23
We discussed a number of things under this topic, e.g. the minimum percentage of translations needed to declare a particular language supported. We also discussed the status of language coordinators.
ACTION: apeter to draft a sprint proposal and send it to the mailing list for discussion.

Translation deadline around Beta
This was the most debated topic. There is definitely a need to look into this issue, since a number of packages get updated after the string freeze. It would be good to have a realistic and workable deadline for the translation freeze. We got lots of suggestions.
ACTION: noriko to prepare a draft for extending the translation deadline and send it to the mailing list.

G11N infrastructure (IRC, wiki, tickets etc.)
We agreed to create a g11n mailing list. Maybe we can simply merge the other lists into the g11n mailing list; traffic on it will not be that high anyway. Also, soon we will have a #fedora-g11n channel.
ACTION: pravins to create the g11n mailing list.

G11N FAD proposal
We quickly discussed the idea of a Globalization FAD and requested everyone interested in attending to add their name. We need to work further on the budget before pushing this to the Fedora Council.

Next meeting
With lots of good discussions, we dropped our earlier plan of meeting once a month and decided to meet every 2 weeks.
June 03, 2015 01:00 PM
In the last two blog posts we discussed how to configure Midokura and Eucalyptus in a single-system install and stand up an AWS-like private cloud with AWS VPC support. In case you missed those blog posts, here are the links:
In this blog post we will discuss a very easy way to get the same setup up and running super quickly.
Using the official eucalyptus cookbook, the midokura-cookbook for Eucalyptus and the Eucalyptus faststart shell script, we automated the entire setup: installing and configuring Midokura (including its dependencies) and installing Eucalyptus configured/integrated with that Midokura deployment.
The upstream routing is configured with static routes (no BGP) in order to keep it simple and all-in-one-box approach.
Important note: for the private network we have taken a safe default subnet, i.e. 172.19.0.0/30, and we assume it to be completely unused elsewhere in the environment.
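To see exactly what that reserved block covers, it can be inspected with Python's ipaddress module (an illustrative check only, not part of the install):

```python
import ipaddress

# The default private subnet the installer reserves.
net = ipaddress.ip_network("172.19.0.0/30")

print(net.num_addresses)              # 4 addresses in a /30
print([str(h) for h in net.hosts()])  # ['172.19.0.1', '172.19.0.2']
```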
Now, finally, in order to get going you can simply run the shell command below on a CentOS 6.6 x86_64 minimal install:
bash <(curl -s https://raw.githubusercontent.com/jeevanullas/eucalyptus-cookbook/mido-one/faststart/cloud-in-a-box.sh) -u https://s3.amazonaws.com/jeevanullas-files/eucalyptus-cookbooks-4.1.1.tgz
Note that we pass a different cookbooks URL path to the script for the moment, as our changes to the eucalyptus and midokura cookbooks have not yet landed upstream.
At the end of the install (if you have chosen to install the extra services), the AWS-like private cloud with AWS VPC support should be up and running, with an instance in the default VPC, in no more than 20 minutes.
NOTE: This is an unsupported configuration, and AWS VPC support is currently in tech preview. You might find rough edges in the implementation, so give it a try only if you are really interested. In case of trouble, reach out to the euca-users Google group.
We hope you like this automation, give it a test run, and share your feedback on the implementation with us.June 02, 2015 09:43 AM
This kitten diaries update almost did not make it. For several days the kittens were not to be seen. Billu had kept them well hidden and did not even allow me near them. After about a week of no signs of kittens being around, we gave up and thought that they succumbed to nature’s ills. … Continue reading Kitten Diaries Part -3 -The playful kittensJune 02, 2015 07:38 AM
* Long time, no updates!
* The focus is back on running again. The reason? Siskiyu!
* Which means another long trip next month. Boring. But in the end there is the joy of seeing new places and meeting new people.
* The vacation (Kavin's, that is) is almost over, so the next few days will be hectic again.
* Today's PJ:
Someone: So, what else is new (navin)?
Me: Who is this Navin? :)
June 02, 2015 03:49 AM
Byobu often reattaches to an old session where the tmux windows are smaller than the current terminal size.
To fit the tmux window to the terminal size, run:
Ctrl+a :attach -d
June 01, 2015 05:58 PM
Rootconf is a devops-centric conference which takes place annually in Bangalore. Rootconf attracts a very tech-savvy, niche crowd, something that is not easily available elsewhere in India. This year there were at least 300 people present at the MLR Convention Center, and I had the opportunity to speak on two topics: Project Atomic, and a security breach that happened at BrowserStack. The talks, planned for 40 minutes each, were scaled down to 15 minutes due to time constraints. It wasn't the ideal situation, but that was all I got. Both talks went smoothly, though the demo was cut short. I enjoyed it nonetheless, and I hope it went well for the attendees too.
I really enjoyed the SaltStack and Icinga introduction talks. Akbar's talk on Kafka was very interesting; that will possibly be the next tool I add to the BrowserStack arsenal. Another interesting find was Inframer by Saurabh Hirani. I'll probably work on that too, but it is not super critical at the moment. The introduction to RancherOS by Shanker Balan was great; it looked very much like Project Atomic from a functionality standpoint. There were many more great sessions.
Rootconf had Birds of a Feather (BoF) sessions this time. That was something I liked a lot: a simple discussion around the most important and interesting topics in the industry. I helped conduct the infrastructure BoF along with Mike Place. Mike is a SaltStack developer and a very experienced fellow in automation and configuration management. Most of all, he is a strong believer that "containers are not a silver bullet solution for everything". That is something we should all appreciate and learn from. I also met Mark Lavi, who works for Idea Device. Mark was one of the developers on Netscape Navigator and had all sorts of stories about the old internet.
Gaurav ran another interesting and much-needed BoF on coping with burnout. I could only attend part of it because my talk was scheduled in the same timeframe, but I came away with some very interesting takeaways. One of them I am going to implement right away: never carry your vacation days forward to the next year.
I had a great time at Rootconf 2015 and I am already looking forward to Rootconf 2016.
Watch the entire Rootconf 2015 videos here.May 29, 2015 06:31 AM
After reading the above news two days ago, it struck me that Gujarat Samachar can deliver this kind of news too. For example:
Due to the carelessness of a Tamil trader, a Hyderabadi customer ate Punjabi people's parathas from a Gujarati-owned Reliance store. When he then complained to a Telangana food inspector, a Marathi-Konkani-Bengali nexus was exposed. Seeing this, Oriya people demanded that the hand of people from M.P. and U.P. could be behind such scams.
My India is great. My Gu.Sa. (Gujarat Samachar) is the greatest of all!
May 26, 2015 04:16 PM
I used LibreOffice to open a Word doc and save it as HTML.
Till LibreOffice 4.1, the images were extracted and stored separately along with the HTML file.
But from LibreOffice 4.2 onwards, they moved to base64 encoding of images, so that images are embedded into the HTML file itself. We cannot separate the images from the HTML files.
This is so annoying, and many people have reported it as a bug here.
But it seems this won't be fixed.
So, I installed LibreOffice 4.1 in /opt just to use the old behaviour of storing images separately.
Just now, I found another utility, unrtf, that does the same.
To install it on Ubuntu/Debian:
sudo apt-get install unrtf
If you get a Word doc with images, save it as a Rich Text File using LibreOffice Writer:
test.doc -> test.rtf
unrtf test.rtf > test.html
This gives a nice HTML file and images separately.
Thanks to the GNU team for the nice utility.
I can get rid of the old LibreOffice 4.1 now.
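If neither the old LibreOffice nor unrtf is at hand, the embedded base64 images can also be split back out of the exported HTML directly. A rough Python sketch (the regex and the img0/img1 file naming are my own assumptions, not a LibreOffice feature):

```python
import base64
import re

def extract_images(html, prefix="img"):
    """Write each base64 data-URI image in `html` to its own file and
    rewrite the src attributes to point at the extracted files."""
    paths = []

    def repl(match):
        ext, data = match.group(1), match.group(2)
        path = "%s%d.%s" % (prefix, len(paths), ext)
        with open(path, "wb") as f:
            f.write(base64.b64decode(data))
        paths.append(path)
        return 'src="%s"' % path

    rewritten = re.sub(r'src="data:image/(\w+);base64,([^"]*)"', repl, html)
    return rewritten, paths
```

Running extract_images(page_html) returns the rewritten HTML plus the list of image files it wrote (img0.png, img1.jpeg, and so on) in the current directory.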
May 25, 2015 11:41 PM