Congratulations!

[Valid RSS] This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: http://pooteeweet.org/rss.xml

  1. <?xml version="1.0" encoding="iso-8859-1"?>
  2. <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  3.    <channel>
  4.        <title>Poo-tee-weet</title>
  5.        <link>http://pooteeweet.org</link>
  6.        <description>Poo-tee-weet: ramblings on PHP, SQL, the web, politics, ultimate frisbee and what else is on in my life</description>
  7.        <dc:language>en</dc:language>
  8.        <generator>WebBuilder2</generator>
  9.        <managingEditor>[email protected] (Lukas Kahwe Smith)</managingEditor>
  10.        <webMaster>[email protected] (Lukas Kahwe Smith)</webMaster>
  11.        <ttl>1440</ttl>
  12.        <item>
  13.            <title>Understanding what is wrong with meritocracy (part one)</title>
  14.            <link>http://pooteeweet.org/blog/0/2272</link>
  15.            <guid>http://pooteeweet.org/blog/0/2272</guid>
  16.            <category>general</category>
  17.            <description>Being fair is very important to me. I have however not really devoted my life to determining what the definition of fairness is; I mostly rely on my gut feeling here, like I assume most people do. In that sense I also accept that fairness, as most people apply it, is based on social conventions and personal experience, both of which are not necessarily &amp;quot;just&amp;quot; in that social conventions often simply keep the ignorance of the past alive and personal experiences are essentially a social experiment with insufficient data. However I also assume that not leveraging social conventions or personal experience would make my daily life impossible as I would be overwhelmed with all the decision making. But if we choose to not challenge ourselves in our daily lives out of convenience, we should at least review our decision framework at regular intervals, especially by actively trying to expand our personal experience with the experiences of others.
  18.  
  19. </description>
  20.            <content:encoded>&lt;p&gt;Being fair is very important to me. I have however not really devoted my life to determining what the definition of fairness is; I mostly rely on my gut feeling here, like I assume most people do. In that sense I also accept that fairness, as most people apply it, is based on social conventions and personal experience, both of which are not necessarily &amp;quot;just&amp;quot; in that social conventions often simply keep the ignorance of the past alive and personal experiences are essentially a social experiment with insufficient data. However I also assume that not leveraging social conventions or personal experience would make my daily life impossible as I would be overwhelmed with all the decision making. But if we choose to not challenge ourselves in our daily lives out of convenience, we should at least review our decision framework at regular intervals, especially by actively trying to expand our personal experience with the experiences of others.&lt;/p&gt;
  21.  
  22. &lt;p&gt;Open source software development is without a doubt one of my big passions. What I enjoy most about it are three things: the intellectual exchanges, the intercultural collaboration and the empowerment it provides for people around the world. Especially in regard to &amp;quot;intercultural&amp;quot;, the demographics in the open source world, especially in the western world, obviously do not represent the demographics in the real world and that of course diminishes one of the three aspects that attracted me to open source to begin with. Even if open source has enabled me to travel around the world both physically and virtually, I believe that the status quo is not ideal. In fact I also see this as unfair. Meaning it&apos;s not just that it diminishes my enjoyment of participating in the open source world, I acknowledge that it&apos;s an injustice that it seems that others are not getting an equal chance to at least enjoy the other positive aspects of open source development. So I would like to see this changed.&lt;/p&gt;
  23.  
  24. &lt;p&gt;Within my twitter timeline I see quite a few people linking to articles such as &lt;a href=&quot;http://www.ashedryden.com/blog/the-ethics-of-unpaid-labor-and-the-oss-community&quot;&gt;&amp;quot;The Ethics of Unpaid Labor and the OSS Community&amp;quot;&lt;/a&gt; and &lt;a href=&quot;https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy&quot;&gt;&amp;quot;The Dehumanizing Myth of the Meritocracy&amp;quot;&lt;/a&gt; that argue that meritocracy, the predominant decision structure in open source, is inherently flawed for two reasons: 1) merit isn&apos;t sufficiently objectively assessed to give marginalized groups equal opportunity and 2) it perpetuates elitism and more specifically rationality over humanity. Now to me meritocracy isn&apos;t key to my enjoyment of open source directly but I have always held it as one of the key contributors to both the intellectual as well as the cultural exchange aspects. The statements in the linked articles call into question whether open source has truly achieved a cultural exchange and to some extent question the motivation for the intellectual exchange as potentially wrong in and of itself. That is some pretty fundamental criticism of one of my biggest passions. Meaning it is very important to be able to address this and take necessary actions based on the expanded understanding I will hopefully gain.&lt;/p&gt;
  25.  
  26. &lt;p&gt;I will try to post about my findings. I assume that as I dive into this, I will say wrong things or more importantly I will say things where I have simply not yet understood the entire picture. I welcome criticism but I hope that people will refrain from attacking my character and instead address my particular statements. I have come to realize that the shortness of twitter messages often makes such differentiation impossible and as such discussions especially about this topic have gone off course from the original intention, i.e. they turned into insults or people pit opinions against each other needlessly resulting in defensiveness (which often results in counter attacks that spiral downward) rather than in an environment of safety required for a dialog about questioning fundamental aspects of one&apos;s passion. I will see if a blog serves as a better platform than twitter.&lt;/p&gt;
  27.  
  28. </content:encoded>
  29.            <pubDate>Fri, 19 Jun 2015 11:09:37 +0200</pubDate>
  30.            <dc:creator>Lukas Kahwe Smith</dc:creator>
  31.        </item>
  32.        <item>
  33.            <title>The future of PHP .. at a distance</title>
  34.            <link>http://pooteeweet.org/blog/0/2259</link>
  35.            <guid>http://pooteeweet.org/blog/0/2259</guid>
  36.            <category>general</category>
  37.            <description>It&apos;s been quite a few years since I have been last subscribed to the internals mailing list. I still hang out in one of the popular core dev IRC channels and follow quite a few on twitter. So I still manage to stay on top of what is happening more or less but not the details in the discussions. Just wanted to put this as a disclaimer. Any opinions in this blog post are opinions formed watching something at a distance and this always runs the risk of being quite wrong or partial.
  38.  
  39. </description>
  40.            <content:encoded>&lt;p&gt;It&apos;s been quite a few years since I have been &lt;a href=&quot;http://pooteeweet.org/blog/1753&quot;&gt;last subscribed to the internals mailing list&lt;/a&gt;. I still hang out in one of the popular core dev IRC channels and follow quite a few on twitter. So I still manage to stay on top of what is happening more or less but not the details in the discussions. Just wanted to put this as a disclaimer. Any opinions in this blog post are opinions formed watching something at a distance and this always runs the risk of being quite wrong or partial.&lt;/p&gt;
  41.  
  42. &lt;p&gt;To me it feels like PHP development has become much better structured. It also feels like the &lt;a href=&quot;https://wiki.php.net/rfc&quot;&gt;RFC process&lt;/a&gt; has enabled an influx of new contributors that previously simply didn&apos;t know how to get their stuff in. There were a bunch of old devs opposed to adding these documented &amp;quot;processes&amp;quot;, saying that &amp;quot;open source is about fun and processes kill the fun&amp;quot;. But what was always shining through that argument to me was that there is always an implicit process and that process is usually ideal for the current people &amp;quot;in power&amp;quot;. As such, there is nothing wrong with that, since if the current people can handle the load, why bother trying to please a theoretical new contributor? But while several core contributors, like Rasmus, Derick and Ilia, have sustained pretty significant levels of contributions, many others have drastically reduced their contributions. Moreover, as the feature scope increases it makes a lot of sense to also grow the number of maintainers. However I guess the really active core group of contributors in most open source projects I know tends to hover around 10-20. The beauty of clearer processes is that they can also help in clearer delegation, which can lead to subgroups within an open source organization that again have an inner circle of 10-20 really active people. Now from all I hear &amp;quot;discussions&amp;quot; on the internals mailing list still have a tendency to generate lots of less than helpful traffic.&lt;/p&gt;
  43.  
  44. &lt;p&gt;But everything could be turned upside down in the near future. I have been &lt;a href=&quot;http://pooteeweet.org/blog/1661&quot;&gt;critical of Facebook&apos;s initial efforts at trying to reimplement PHP&lt;/a&gt;. A change of direction towards a JIT without a code compilation requirement however has made their efforts significantly more viable. More importantly Facebook is now actively trying to &lt;a href=&quot;http://www.hhvm.com/blog/875/wow-hhvm-is-fast-too-bad-it-doesnt-run-my-code&quot;&gt;enable anyone to run their chosen PHP framework on top of HHVM&lt;/a&gt;. They are even &lt;a href=&quot;https://groups.google.com/d/msg/php-fig/iwMXyrruwvk/z_ZELhZBAU8J&quot;&gt;actively soliciting feedback from framework authors&lt;/a&gt; on where they would like HHVM to go next in terms of language features. Compare this to current PHP internals, where it seems to be a &lt;a href=&quot;https://wiki.php.net/rfc/annotations&quot;&gt;never&lt;/a&gt; &lt;a href=&quot;https://wiki.php.net/rfc/propertygetsetsyntax-v1.2&quot;&gt;ending&lt;/a&gt; &lt;a href=&quot;https://wiki.php.net/rfc/engine_exceptions&quot;&gt;battle&lt;/a&gt; between mostly the older developers concerned with backwards compatibility, no doubt one of the reasons why &lt;a href=&quot;http://w3techs.com/technologies/details/pl-php/all/all&quot;&gt;PHP has been able to catch 80% of the internet&lt;/a&gt;, and framework authors asking for new language features to enable easier development. Make no mistake, adopting Facebook-specific syntax in frameworks will of course make that code incompatible with PHP itself. Some of this could be &amp;quot;fixed&amp;quot; by a &amp;quot;compiler&amp;quot; that transforms HHVM-specific features to normal PHP code, but that would probably be more sad than ironic. Also what about Windows users, still a very significant portion of the PHP user base?&lt;/p&gt;
  45.  
  46. &lt;p&gt;Nonetheless the proposition seems tasty when wearing my framework hat. But while this is exciting, it&apos;s at least as troubling to me. Do we really want Facebook to have final say in how the language evolves? I am not even sure if Facebook really wants this responsibility. I guess other scripting languages have already had to deal with this situation with various popular reimplementations of Ruby (JRuby, IronRuby ..) and Python (Jython, IronPython ..) having scooped up large parts of their user base. Then again I assume these reimplementations have actually also helped grow or at least sustain their user bases. Facebook has hired quite a few previous PHP core developers though I am not entirely sure how involved they are in HHVM development. But at the very least it could ensure that there is a bit of a trust relationship between PHP internals and HHVM, which is of course quite important in the open source world. &lt;a href=&quot;http://www.infoworld.com/t/php-web/believe-the-hype-php-founder-backs-facebooks-hiphop-technology-231012&quot;&gt;Rasmus also seems to be sympathetic to HHVM&lt;/a&gt;. To me a key requirement for this all to make sense is for more non-Facebook employees to get involved in HHVM development. This would ensure that the project wouldn&apos;t blow up if for some reason Facebook loses interest. It would also help in making the internal decision process on HHVM more transparent.&lt;/p&gt;
  47.  
  48. &lt;p&gt;At the same time I see a huge opportunity here. If I remember correctly it was &lt;a href=&quot;http://thieso2.de&quot;&gt;Thies&lt;/a&gt; who first stated that the goal of PHP internals should focus on making it possible that all extensions could in fact be written in PHP rather than C, back at some LinuxTag over a decade ago. With HHVM&apos;s JIT this now seems more feasible than ever before. I have long said that building a good API is an iterative process which requires input from many people and early adopters to battle test. However as PHP is written in C the number of contributors is limited and it&apos;s not easy to get a broad range of testers to put the concepts into real world testing. PECL has tried to reduce this pain point a bit by providing infrastructure for C developers to create and distribute extensions outside of the core release process. But to me it&apos;s still clear that it&apos;s not a sufficient solution to provide the number of eyes needed to build more complex APIs. So HHVM could help push forward a revolution in the process of adding new functionality to PHP that I have been awaiting for ages.&lt;/p&gt;
  49.  
  50. &lt;p&gt;So in conclusion there are lots of reasons to be excited about HHVM&apos;s impact on the PHP community. But we should also ensure that in the process the community does not become dependent on a commercial entity.&lt;/p&gt;
  51.  
  52. &lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: The performance of HHVM seems quite impressive indeed these days, since Chregu ran these &lt;a href=&quot;http://blog.liip.ch/archive/2013/10/29/hhvm-and-symfony2.html&quot;&gt;benchmarks&lt;/a&gt;, the &lt;a href=&quot;http://www.hhvm.com/blog/2393/hhvm-2-3-0-and-travis-ci&quot;&gt;HHVM team has released version 2.3&lt;/a&gt;, which supposedly reduces CPU load another 20%. &lt;a href=&quot;http://about.travis-ci.org/blog/2013-12-16-test-php-code-with-the-hiphop-vm/&quot;&gt;HHVM is now also available on Travis-CI&lt;/a&gt;.&lt;/p&gt;
  53.  
  54. &lt;p&gt;&lt;strong&gt;Update 2&lt;/strong&gt;: The article has been posted to &lt;a href=&quot;https://news.ycombinator.com/item?id=6921697&quot;&gt;hacker news&lt;/a&gt;, resulting in a surprisingly sane discussion.&lt;/p&gt;
  55.  
  56. </content:encoded>
  57.            <pubDate>Tue, 17 Dec 2013 09:55:00 +0100</pubDate>
  58.            <dc:creator>Lukas Kahwe Smith</dc:creator>
  59.        </item>
  60.        <item>
  61.            <title>API versioning in the real world</title>
  62.            <link>http://pooteeweet.org/blog/0/2248</link>
  63.            <guid>http://pooteeweet.org/blog/0/2248</guid>
  64.            <category>general</category>
  65.            <description>We here at Liip are currently building a JSON REST API for a customer. At least initially it will only be used by 3 internal projects. One of these we are building ourselves and the 2 others are built by partner companies of the customer. Now we want to define a game plan for how to deal with BC breaks in the API. The first part of the game plan is to define what we actually consider a BC break and therefore requires a new API version. We decided to basically define that only removed fields, renamed fields or existing fields whose content has changed should be a BC break. In other words adding a new field to the response should not be considered a BC break. Furthermore changes in how results are sorted are generally not to be considered a BC break as loading more data or an upgrade of the search server can always result in minor reordering. However we would consider a change in any defaults (e.g. changes in default sort order) or changing the default output from &amp;quot;full&amp;quot; to &amp;quot;minimal&amp;quot; to be a BC break. But I guess we would not consider changing from &amp;quot;minimal&amp;quot; to &amp;quot;full&amp;quot; as a BC break as it would just add more fields by default. That being said, for caching reasons, we try not to work with too many such defaults anyway and rather have more required parameters. With these definitions ideally we should only rarely have to bump the API version. But there will be a day where we will have to nonetheless.
  66.  
  67. </description>
  68.            <content:encoded>&lt;p&gt;We here at &lt;a href=&quot;http://liip.ch&quot;&gt;Liip&lt;/a&gt; are currently building a JSON REST API for a customer. At least initially it will only be used by 3 internal projects. One of these we are building ourselves and the 2 others are built by partner companies of the customer. Now we want to define a game plan for how to deal with BC breaks in the API. The first part of the game plan is to define what we actually consider a BC break and therefore requires a new API version. We decided to basically define that only removed fields, renamed fields or existing fields whose content has changed should be a BC break. In other words adding a new field to the response should not be considered a BC break. Furthermore changes in how results are sorted are generally not to be considered a BC break as loading more data or an upgrade of the search server can always result in minor reordering. However we would consider a change in any defaults (e.g. changes in default sort order) or changing the default output from &amp;quot;full&amp;quot; to &amp;quot;minimal&amp;quot; to be a BC break. But I guess we would not consider changing from &amp;quot;minimal&amp;quot; to &amp;quot;full&amp;quot; as a BC break as it would just add more fields by default. That being said, for caching reasons, we try not to work with too many such defaults anyway and rather have more required parameters. With these definitions ideally we should only rarely have to bump the API version. But there will be a day where we will have to nonetheless.&lt;/p&gt;
  69.  
  70. &lt;p&gt;First up we do not want to use the URL to version the resources for the obvious reason that this violates the concepts of REST, as this would imply that &amp;quot;/v1/foo&amp;quot; and &amp;quot;/v2/foo&amp;quot; are not the same resource. Remember the &amp;quot;RE&amp;quot; in REST stands for &amp;quot;REpresentational&amp;quot; which means we are talking about representation of state. Therefore a single URI should be used per resource as the unique identifier. Instead, to get different representations, we should use media types. There isn&apos;t really a universally accepted standard for how to encode version information into a media type. So far so bad. To make things worse it gets a bit iffy to define custom media types (i.e. &amp;quot;application/vnd.my_api+v1.1&amp;quot;) for different versions as then you would logically also return that as the Content-Type in the response. This in turn will make generic tools unable to pick up that the response is in fact JSON if it&apos;s not just &amp;quot;application/json&amp;quot;. A convention to at least make it human-understandable is to add &amp;quot;+json&amp;quot; to the custom media type and I guess clients like browsers could be made to understand this convention. It also seems like browsers sometimes ignore the Content-Type entirely and simply try to guess by looking at the actual content.&lt;/p&gt;
  71.  
  72. &lt;p&gt;Then again there is no standard that defines what a web application is actually supposed to do with an Accept header in the strict sense. Sure there is the &amp;quot;q&amp;quot; parameter which defines the priorities of the different media types in the Accept header. But technically it is up to the server to decide what to make of these priorities and as far as I know it can also choose to respond with any media type it wants to. Meaning a request with &amp;quot;Accept: application/vnd.my_api+json+v1.1&amp;quot; could come back with &amp;quot;Content-Type: application/json&amp;quot;. Obviously if the server does not support the requested media type (e.g. it never did or no longer does) it should return a 406 HTTP status code.&lt;/p&gt;
  73.  
  74. &lt;p&gt;However this brings us to the next issue: Should we have separate version numbers for different parts of the API? Having different versions would make sense if we will likely have releases that will touch smaller parts. But this will complicate the life of the API user, who would then need to worry about sending the proper media types for different parts of the API. But a release-often approach might still make this quite sensible as long as we then keep older versions supported for a longer time. However if we have bigger releases it might be easier for everyone if we just increment the entire API if there are any BC breaks. Given that we have a small group of API users we might then even force everyone to update to the new API within a defined timeframe. This could be nice because then with every such big release we would remove potentially deprecated code for previous versions.&lt;/p&gt;
  75.  
  76. &lt;p&gt;But this brings us to the topic of caching. Caching really requires that we also include the version in the response. So returning &amp;quot;Content-Type: application/json&amp;quot; would be a no go. It would be better if we would return the actual version of the returned structure in the response, so we are back at returning &amp;quot;Content-Type: application/vnd.my_api+json+v1.1&amp;quot;. This way we could ensure we do not return duplicates into the cache, even if parts of the API got their version number incremented without any actual change in the response. But actually it does not really help us that much to look at the response. For doing the actual cache lookups on a request to be able to determine if we have a response cached we obviously only have the request data: we would need to find a solution so that for example the reverse proxy knows that for specific parts of the API &amp;quot;application/vnd.my_api+json+v1.1&amp;quot; should actually use &amp;quot;application/vnd.my_api+json+v1.0&amp;quot; as the cache key lookup. But this raises the even bigger question: how does Varnish deal with content type negotiation in general? How will Varnish figure out how to deal with more complex Accept headers like &amp;quot;Accept: application/vnd.my_api+json+v1.2; q=1, application/vnd.my_api+json+v1.1; q=0.6, application/vnd.my_api+json+v1.0; q=0.5&amp;quot;? Effectively one would need to handle the entire content type negotiation inside the reverse proxy in order to do the correct cache lookups. And as stated before, this would even need to be aware of the fact that e.g. &amp;quot;/foo&amp;quot; might already be at &amp;quot;1.2&amp;quot; while &amp;quot;/bar&amp;quot; might still be at &amp;quot;1.0&amp;quot;. I guess this issue could be handled via the same &lt;a href=&quot;http://pooteeweet.org/blog/2033&quot;&gt;trick I employed for authentication&lt;/a&gt;, i.e. 
translating requests into HEAD requests to let the web application figure this out, but this will add overhead to every request. So at this point I am scratching my head and wondering how to proceed.&lt;/p&gt;
  77.  
  78. &lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: An &lt;a href=&quot;http://www.mnot.net/blog/2012/12/04/api-evolution&quot;&gt;interesting article on managing API evolution&lt;/a&gt; discussing what should trigger a version increment and how to plan ahead to prevent that.&lt;/p&gt;
  79.  
  80. &lt;p&gt;&lt;strong&gt;Update 2&lt;/strong&gt;: Another &lt;a href=&quot;http://www.infoq.com/news/2013/12/api-versioning&quot;&gt;interesting article&lt;/a&gt;, this one about the costs of different API versioning strategies.&lt;/p&gt;
  81.  
  82. </content:encoded>
  83.            <pubDate>Mon, 09 Dec 2013 09:28:13 +0100</pubDate>
  84.            <dc:creator>Lukas Kahwe Smith</dc:creator>
  85.        </item>
  86.        <item>
  87.            <title>What is next for Symfony2?</title>
  88.            <link>http://pooteeweet.org/blog/0/2239</link>
  89.            <guid>http://pooteeweet.org/blog/0/2239</guid>
  90.            <category>general</category>
  91.            <description>Or rather what is left to do for the Symfony2 community? Obviously there are some missing features, bug fixes, performance enhancements and polish to apply to various parts of our code base. In terms of features, I think the main part that could use some more love is the HttpCache. But by and large, I think we cover everything that we need to cover and we do it quite well. When looking over the 3.0 UPGRADE file I am also not seeing anything that hurts badly enough to be a good reason to start working on the next major version. Given that, the question becomes where to direct our attention as a community?
  92.  
  93. </description>
  94.            <content:encoded>&lt;p&gt;Or rather what is left to do for the Symfony2 community? Obviously there are some missing features, bug fixes, performance enhancements and polish to apply to various parts of our code base. In terms of features, I think the main part that could use some more love is the &lt;a href=&quot;https://github.com/symfony/symfony/pull/6213&quot;&gt;HttpCache&lt;/a&gt;. But by and large, I think we cover everything that we need to cover and we do it quite well. When looking over the &lt;a href=&quot;https://github.com/symfony/symfony/blob/master/UPGRADE-3.0.md&quot;&gt;3.0 UPGRADE&lt;/a&gt; file I am also not seeing anything that hurts badly enough to be a good reason to start working on the next major version. Given that, the question becomes where to direct our attention as a community?&lt;/p&gt;
  95.  
  96. &lt;p&gt;Avid readers of my blog might have noticed a theme in recent blog posts. A while ago I noted that core developers of the early days have become a lot &lt;a href=&quot;http://pooteeweet.org/blog/2204&quot;&gt;less active&lt;/a&gt;. Then I posted about the need to start working on higher level code to &lt;a href=&quot;http://pooteeweet.org/blog/2205&quot;&gt;make Symfony2 more rapid development friendly&lt;/a&gt;. Following this post I blogged about what is missing to &lt;a href=&quot;http://pooteeweet.org/blog/2221&quot;&gt;make Symfony2 truly great for building REST APIs&lt;/a&gt;. Now last evening at DrupalCamp Vienna I was asked what is left to do for the Symfony2 community and it didn&apos;t take me long to think of an answer: Bundles!&lt;/p&gt;
  97.  
  98. &lt;p&gt;We have an insane number of contributors to the core, helping on small to complex tasks. Yet, if you look over the most popular Bundles on &lt;a href=&quot;http://knpbundles.com&quot;&gt;knpbundles.com&lt;/a&gt;, most are by and large maintained by a single person with very few contributors. Then again, there is a continuous stream of new tickets (both bugs and feature requests). As the lead maintainer of &lt;a href=&quot;https://github.com/FriendsOfSymfony/FOSRestBundle&quot;&gt;FOSRestBundle&lt;/a&gt;, I have repeatedly tried to find additional helping hands but for the most part it is still only me working on the code. &lt;a href=&quot;https://github.com/KnpLabs/KnpMenuBundle&quot;&gt;KnpMenuBundle&lt;/a&gt; and its underlying library have been stalled in development for over a year now. I guess &lt;a href=&quot;http://sonata-project.org&quot;&gt;Sonata&lt;/a&gt; does have a decently active community. So that is good news; then again Sonata has such a large scope these days that it needs even more helping hands. This is most evident in the lack of work on the documentation.&lt;/p&gt;
  99.  
  100. &lt;p&gt;But how to redirect the resources? Obviously people work on the things that they need. Then again, some people also like to work for fame as is evident from the excitement around the &lt;a href=&quot;https://connect.sensiolabs.com&quot;&gt;sensio connect badges&lt;/a&gt; and how much pride people associate with being listed on the &lt;a href=&quot;http://symfony.com/contributors&quot;&gt;contributors page&lt;/a&gt;. We do have &lt;a href=&quot;http://symfony.com/doc/current/bundles/index.html&quot;&gt;a few Bundles listed&lt;/a&gt; in the documentation but this is more a semi-random list of Bundles. Maybe one solution is to try and pick a dozen or so Bundles in the community and promote them a bit more within the community? For example by adding them to the docs and giving badges to contributors.&lt;/p&gt;
  101.  
  102. </content:encoded>
  103.            <pubDate>Sun, 24 Nov 2013 12:23:30 +0100</pubDate>
  104.            <dc:creator>Lukas Kahwe Smith</dc:creator>
  105.        </item>
  106.        <item>
  107.            <title>__toString() or not __toString()?</title>
  108.            <link>http://pooteeweet.org/blog/0/2231</link>
  109.            <guid>http://pooteeweet.org/blog/0/2231</guid>
  110.            <category>general</category>
  111.            <description>The __toString() method belongs to the family of methods and functions called &amp;quot;magic functions&amp;quot;. They are magic because for the most part they do not get called explicitly but rather intercept operations. Unfortunately there are limits to its magic, specifically the only &amp;quot;context&amp;quot; the method is aware of is its design contract: to return a string. But it&apos;s not clear what that purpose is. Should this be for some internal debugging or logging purposes? There one would be most interested in internal identifiers and object state. Is it for some frontend UI where the user will most likely be interested in some textual identifier that isn&apos;t too long, so as not to clutter the UI? Therein lies the dilemma of the magic: while useful, there is no way to ensure that the given context is passed on.
  112.  
  113. </description>
  114.            <content:encoded>&lt;p&gt;The &lt;a href=&quot;http://www.php.net/manual/en/language.oop5.magic.php#object.tostring&quot;&gt;__toString()&lt;/a&gt; method belongs to the family of methods and functions called &amp;quot;magic functions&amp;quot;. They are magic because for the most part they do not get called explicitly but rather intercept operations. Unfortunately there are limits to its magic, specifically the only &amp;quot;context&amp;quot; the method is aware of is its design contract: to return a string. But it&apos;s not clear what that purpose is. Should this be for some internal debugging or logging purposes? There one would be most interested in internal identifiers and object state. Is it for some frontend UI where the user will most likely be interested in some textual identifier that isn&apos;t too long, so as not to clutter the UI? Therein lies the dilemma of the magic: while useful, there is no way to ensure that the given context is passed on.&lt;/p&gt;
  115.  
  116. &lt;p&gt;A very extreme solution would be to simply first set a context on the object before using __toString() but at that point one could just as well call another method. As such I think this method can reasonably only be used for one of the two purposes within a given code base. Now the question is which of the two should it be?&lt;/p&gt;
  117.  
  118. &lt;p&gt;One could make the argument that output for UI purposes will mostly be done inside a template. There one would prefer to limit logic as much as possible. Furthermore it would be great to ensure consistent output across the entire UI and it seems quite useful to leverage __toString() to ensure just that. That being said, even there things could get complicated when dealing with translations which might be managed by the object. Additionally there might be cases where one wants a longer representation and others where one needs a shorter one. So the issue of consistent output will likely need to be dealt with in another manner anyway.&lt;/p&gt;
  119.  
  120. &lt;p&gt;The other use case could be for internal purposes like debug messages or logging. Often code that deals with generating errors has the non-trivial task of figuring out what to do with whatever broken information it got. So here it could also be quite useful to be able to just serialize an object and its pertinent aspects with as little knowledge about the object&apos;s class as possible. However in the real world I keep seeing debug code that first checks if __toString() is defined and if so it&apos;s called explicitly and if not some other logic is used to generate a meaningful message. As such __toString() is not really all that magic. It could be any other method name just as well since everything is called manually without some magic interception. It would of course all be different if we could all rely on there being a useful __toString() method for every object and so we could just embed any scalar and object without care into a debug message knowing that it will add a useful representation of the variable into the log message.&lt;/p&gt;
  121.  
  122. &lt;p&gt;I posed this question on twitter and there is a &lt;a href=&quot;https://twitter.com/lsmith/status/372330165600153600&quot;&gt;lively&lt;/a&gt; &lt;a href=&quot;https://twitter.com/lsmith/status/372329871587823618&quot;&gt;discussion&lt;/a&gt; going on there.&lt;/p&gt;
  123.  
  124. &lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt;&lt;br /&gt;
  125. The gist of the comments on twitter is .. people are pretty evenly split between only using __toString() for UI output and only using it for internal purposes.&lt;/p&gt;
  126.  
  127. </content:encoded>
  128.            <pubDate>Tue, 27 Aug 2013 16:17:59 +0200</pubDate>
  129.            <dc:creator>Lukas Kahwe Smith</dc:creator>
  130.        </item>
  131.    </channel>
  132. </rss>
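The validated source above can be consumed with nothing but the Python standard library. A minimal sketch (the helper name `summarize_feed` and the fallback of 60 minutes for a missing <ttl> are my own choices; the namespace URIs are the ones declared on the <rss> element above):

```python
# Sketch of how a feed reader might consume the feed shown above,
# using only the Python standard library.
import xml.etree.ElementTree as ET

# Prefixes map to the namespace URIs declared on the <rss> element.
NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def summarize_feed(xml_text):
    """Extract the channel title, cache ttl and per-item metadata."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [
        {
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            # dc:creator and content:encoded live in the declared namespaces.
            "creator": item.findtext("dc:creator", namespaces=NS),
            "has_body": item.find("content:encoded", NS) is not None,
        }
        for item in channel.findall("item")
    ]
    return {
        "title": channel.findtext("title"),
        # <ttl> is the number of minutes a cache may hold the feed;
        # 60 is assumed here when the element is absent.
        "ttl_minutes": int(channel.findtext("ttl", default="60")),
        "items": items,
    }
```

Note that `ET.fromstring` must be given bytes rather than an already-decoded string whenever the XML declaration names an encoding, as the `iso-8859-1` declaration above does.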

If you would like to create a banner that links to this page (i.e. this validation result), do the following:

  1. Download the "valid RSS" banner.

  2. Upload the image to your own server. (This step is important. Please do not link directly to the image on this server.)

  3. Add this HTML to your page (change the image src attribute if necessary):

If you would like to create a text link instead, here is the URL you can use:

http://www.feedvalidator.org/check.cgi?url=http%3A//pooteeweet.org/rss.xml
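The query string in that link percent-encodes the feed address (only the ":" of the scheme becomes %3A). A small sketch of reproducing it with Python's standard library:

```python
# Rebuild the check URL shown above. quote() with safe="/" leaves the
# slashes intact and percent-encodes the ":" of the scheme as %3A.
from urllib.parse import quote

feed = "http://pooteeweet.org/rss.xml"
check_url = "http://www.feedvalidator.org/check.cgi?url=" + quote(feed, safe="/")
# → http://www.feedvalidator.org/check.cgi?url=http%3A//pooteeweet.org/rss.xml
```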

Copyright © 2002-9 Sam Ruby, Mark Pilgrim, Joseph Walton, and Phil Ringnalda