How to give the real estate market a small boost

I recently read in the book The 3rd Alternative, by Stephen Covey, a quoted solution from Edward de Bono for reactivating the real estate market when the trend is clearly downward, as it is now.

The problem is that the seller wants to sell at today's price, while the buyer knows that in a short time the home will be worth less, so he doesn't buy. De Bono's third alternative is to close the deal at today's price, with the commitment that the seller will return part of the money to the buyer after some time if the price has fallen.

The solution is a good starting point, but it's hardly realistic that a seller will ever give money back. Below I describe a solution that could work.

The seller wants to sell at today's price, say 100, and the buyer believes that in a year the house will be worth less, say 75. Now bring together the buyer, the seller and a bank, and suppose the seller is willing to sell at 75 if in a year's time the price is 75 or less, and the buyer is willing to pay 100 if the price doesn't fall to 75. This can be implemented as follows:

1.- the bank grants the buyer a mortgage for 100

2.- the seller receives 75

3.- the difference is kept in a one-year fixed-term deposit

4.- after one year the deposit is settled: if the price is above 75, the deposit is paid to the seller; otherwise it is used to partially pay down the buyer's mortgage
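
The settlement in step 4 is simple arithmetic; here is a minimal sketch in Java, using the 100/75 figures from the example (class and method names are made up for illustration):

```java
// Settlement of the escrowed difference after one year.
// The sale closed at 100 with a floor price of 75, so 25 sits in the deposit.
public class EscrowSettlement {

    /** Amount the deposit pays to the seller, given the index price after one year. */
    static int toSeller(int indexPrice, int floorPrice, int deposit) {
        // Price held above the floor: the seller collects the full deposit.
        return indexPrice > floorPrice ? deposit : 0;
    }

    /** Amount used to partially pay down the buyer's mortgage. */
    static int toBuyerAmortization(int indexPrice, int floorPrice, int deposit) {
        return deposit - toSeller(indexPrice, floorPrice, deposit);
    }

    public static void main(String[] args) {
        int deposit = 100 - 75; // difference kept in the fixed-term deposit
        // Price stayed at 90: the seller ends up collecting 100 in total.
        System.out.println(toSeller(90, 75, deposit));            // 25
        // Price fell to 70: the 25 amortizes the buyer's mortgage instead.
        System.out.println(toBuyerAmortization(70, 75, deposit)); // 25
    }
}
```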

The main problem is having a reliable, independent price index, adjusted to the area where the property is located, that both buyer and seller accept. One possibility is to require that this kind of transaction be recorded in the property registry together with the contract prices and an area identifier, so that the registry can automatically publish, weekly or monthly and with a public calculation algorithm, the price per square meter or some other index for each area. That way, buyer and seller can agree on their prices, or on the future value of the index, to establish the contract.

The advantage of this kind of agreement is that the seller need not fear setting a low price, because if the price doesn't fall that far he will collect 100% after a year. The buyer has the hope of perhaps buying at a significantly lower price. And the bank gets a one-year deposit for the difference; if the buyer is the one who benefits, the mortgage is partially paid down and the risk decreases.


Spring and Java trends

I've been using Spring for years. It's been one of those technologies that just work. The learning curve was very smooth, and it included some surprising technologies, like AspectJ, that enabled Spring to do very useful things, such as transparent transaction handling.

One of Spring's main promises was to be non-invasive: to keep the source code as unaware as possible of Spring's existence, and as unaware as possible of the service implementations.

Spring configuration was done with XML files, which I think was a good choice: there is no confusion about when you are dealing with Spring and when you are dealing with your application. Besides, it's a single, central repository of configuration.
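
A minimal sketch of that non-invasive XML style (bean ids and class names are invented for illustration): the classes stay plain Java, unaware of Spring, while all the wiring lives in one file.

```xml
<!-- Hypothetical example: InvoiceService and JdbcInvoiceDao are ordinary
     classes with no Spring imports or annotations; only this file knows Spring. -->
<beans xmlns="http://www.springframework.org/schema/beans">
    <bean id="invoiceDao" class="com.example.JdbcInvoiceDao"/>
    <bean id="invoiceService" class="com.example.InvoiceService">
        <property name="invoiceDao" ref="invoiceDao"/>
    </bean>
</beans>
```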

That solid foundation was broken with Spring Framework 3.0, and it doesn't even exist in the Spring Roo and Spring Data Graph projects. There is a Java trend called annotations. I think they are good when used in moderation. The issue is when everything is done with annotations; then they can become a nightmare. Some annotations create a proxy, others are for CDI checks, others carry Hibernate column info, others mark transactions or Spring stereotypes. Most of them are Spring artifacts that make the project absolutely invasive of, and dependent on, Spring. Who cares whether a class is a Spring @Service or a Spring @Repository? Those annotations are outside the project's domain.

The worst part is the extensive use of AspectJ in Spring Data Graph and Roo to create methods in the class file that don't exist in the source files. Those projects force you to use Eclipse with the STS plugin, which means not only library dependencies but a code-writing dependency on Spring. IntelliJ support for AspectJ is poor, and in NetBeans it doesn't exist. But who cares? NetBeans is now a loser, thanks to Oracle.

It seems the Spring team wants to do funny things, extreme coding. I think they are missing the target on some subjects.

Websocket strategies

This page is really good on #websockets. It lists current technology servers like jWebSocket, Kaazing and others.

I think all those technologies aim at keeping the current HTTP dominance. They think neither about replacing HTTP nor about making websockets transparent to developers. That strategy doesn't help increase developer productivity; on the contrary, it adds complexity to projects. It's easier and simpler to turn the browser into a remote renderer of the application session and make all of that transparent to the application developer.

You can read my point of view in these entries:

Websockets: a new hope for RIAs

Websocket bridge detachable sessions

Websocket bridge: detachable sessions

Yesterday I wrote a blog entry about a new hope for RIAs: "a new hope" for the end of the HTTP empire, defeated by the websocket rebel alliance.

A websocket bridge could support session detach and reattach. This is useful in three cases:

  1. when there is a communication failure: no time is needed to navigate back to the current page
  2. when the user wants to detach from the session and reattach later
  3. when the application is a portlet in a page: navigating to another portal page may cause the session to end, but navigating back will bring the websocket-bridged portlet application back to the same page

Websockets: A new hope for RIA

There is some buzz about the upcoming arrival of Websockets as a standard. Kaazing is doing a very good job. But most presentations and articles say websockets are a good helper for today's technologies like JSF, GWT, Struts, etc. I disagree. I think WEBSOCKETS ARE MAINLY A JSP, JSF, GWT, STRUTS KILLER for rich internet applications (RIA). This post explains how it can be done, and you will recognize that most of the technologies are already available.

Screen design in client-server applications is straightforward. The IDE of choice has a visual designer and component libraries, and it helps link client events/actions to server procedures. The framework handles everything, so server-side changes are reflected on the client screen in a seamless, transparent way. Why must RIAs be different? The reason is that current RIAs run on the simple, slow, unidirectional, half-duplex and old-fashioned HTTP protocol.

The websocket protocol is tightly integrated with HTTP so it can traverse firewalls, but it enables full-duplex TCP connections between the browser and the server to exchange binary content. So, what can websockets do for us? In short: use the initial HTTP query to load some JavaScript in the browser, establish a websocket connection, and put the browser under the remote control of the server application, with all the data transferred through the websocket connection.

One of the original web server principles was that the server must not keep any session information: HTTP queries are stateless from the server's point of view. All the session data is stored in the browser, and every query sends it back to the server. Keeping all sessions on the server may require a lot of space for large web applications: 2 MB per session and 10,000 concurrent sessions means 20 GB. But those are not the RIA scenarios. Companies may have hundreds of concurrent sessions, maybe thousands, but in such cases there is no question about adding more servers, since they run the enterprise's core applications.

Anyway, technology has evolved and today servers usually keep all the active session data. If you remember, JSP, JSF, etc. are technologies that bridge the browser and the server to handle HTTP GET and POST actions and link those actions with the session saved on the server. They have to create views, apply changes, invoke event handlers, etc. If no GET and POST actions are needed, data exchange and event invocation are quicker. Since data serialization between the browser and the application is the main task of a websocket bridge, that bunch of HTTP bridging technologies is no longer needed.

All those technologies add lots of artifacts, workarounds and time-consuming tasks that make RIA design very expensive. We need true RAD tools for RIAs, and websockets are the transport for them. RIAs need to show the desired page in the browser whenever the server wants, and to send user events to the server whenever they happen, with the lowest latency possible.

This is my wish list for a websocket based bridge for RIA:

  • must handle transparently any difference between browsers: DOM differences, CSS differences, those annoying pixels, etc. The application should never need to know which browser brand the session is running on

  • must support compressed and uncompressed data flows; the bridge should evaluate whether compression is worthwhile, overridable by the application

  • must support portlets, i.e. multiple portlets based on the same or different websocket bridge brands and versions. Portal integration (a portal bridge) must be included from the very beginning

  • there should be component packages selectable and configurable by the application

  • layouts, sizes and styles can be set at the beginning programmatically, by loading a text configuration file, or generated by the visual IDE. If we want RAD, a visual IDE is a must

  • layouts, sizes and styles set by the application must be understandable by human beings, which excludes most CSS attributes and weird artifacts. In fact, layouts, sizes and styles will be translated to HTML elements and CSS styles depending on the actual browser brand and version. It must be clear at all times that component styles are not CSS styles

  • there should also be effects libraries, non-visual components and even composite components to be used in the browser

  • the bridge should work with clusters

  • resources can be supplied over HTTP

  • the websocket transport facility should be helped by two services:

    • auto reconnection: if the websocket connection is lost, the reconnection service will try to reestablish the link. This service could be configured to choose the reconnection mode: automatic, when there is new data to be sent to the application, or on user demand through a popup panel

    • message multiplexing with QoS: sometimes more than one thread needs to send data to the other end, and some of that data may have higher priority than usual. If a message exceeds a given length, it is cut into pieces. When a message piece has been sent, the queue of pieces is scanned to determine which piece to transmit next, according to message priorities. Usually browser events should have high priority and file uploads low priority. Anyway, each component and each DOM branch has a default priority for sending to the browser, and each component and browser event has a default priority for sending to the application; those priorities can be overridden by the application. This way we can achieve low application latency even when the data flow between browser and server is high.

  • the bridge and component libraries should support i18n transparently. A language change can happen at any moment, and the browser should show texts, formats and other localized resources through automatic translation built into the bridge and the components. Application localization shouldn't need any developer-supplied listener in 99% of localization changes

  • the bridge and component libraries should support Spring framework integration from the very beginning. Bean injection and expression evaluation should work in the application, in every component and in the bridge itself. Spring expression support is very helpful for i18n

  • to keep HTTP sessions alive, the browser side of the bridge will make void http/https requests every minute or so
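
The multiplexing-with-QoS service from the wish list can be pictured as a priority queue of message chunks. A minimal sketch, with invented names and a toy chunk size, just to show the scheduling idea:

```java
import java.util.PriorityQueue;

// Sketch of the QoS multiplexer: messages are cut into fixed-size chunks,
// and after each send the highest-priority pending chunk goes next.
public class ChunkMultiplexer {

    static final int MAX_CHUNK = 4; // tiny chunk size just for the demo

    // Lower priority value = more urgent; seq keeps FIFO order within a priority.
    record Chunk(int priority, long seq, byte[] payload) {}

    private long seq = 0;
    private final PriorityQueue<Chunk> queue = new PriorityQueue<>(
        (a, b) -> a.priority() != b.priority()
                ? Integer.compare(a.priority(), b.priority())
                : Long.compare(a.seq(), b.seq()));

    /** Cut a message into chunks and enqueue them with the given priority. */
    public void enqueue(byte[] message, int priority) {
        for (int off = 0; off < message.length; off += MAX_CHUNK) {
            int len = Math.min(MAX_CHUNK, message.length - off);
            byte[] piece = new byte[len];
            System.arraycopy(message, off, piece, 0, len);
            queue.add(new Chunk(priority, seq++, piece));
        }
    }

    /** Next chunk to put on the wire, or null when everything was sent. */
    public byte[] nextChunk() {
        Chunk c = queue.poll();
        return c == null ? null : c.payload();
    }

    public static void main(String[] args) {
        ChunkMultiplexer mux = new ChunkMultiplexer();
        mux.enqueue("UPLOADDATA".getBytes(), 5); // low-priority upload, 3 chunks
        mux.enqueue("CLK".getBytes(), 1);        // high-priority browser event
        System.out.println(new String(mux.nextChunk())); // CLK jumps the queue
    }
}
```

A bulk upload enqueued first is interleaved behind a browser event enqueued later, which is exactly the low-latency property the wish list asks for.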

The first HTTP GET should load a JavaScript bootstrap that evaluates the browser brand, version and capabilities, the portletized state if applicable, other loaded JavaScript libraries and the previous session cookie, then creates a websocket connection with the application and sends it all that information. With it, the application can attach the browser to a previous, non-expired bridge session, or create a new one and configure the bridge for it.

Once configured, the session is initialized and browser-specific JavaScript is sent to the browser, if required, through the websocket connection. All content is always sent through the websocket. Then the first page is created by the application as a set of component instances and layouts on the server. Translation to HTML components, styles, layouts and scripts is done by the bridge, with the component library's support, and sent to the browser. I think no XHTML should be sent; JSON is probably quicker to send and to handle in the browser. How elements, attributes, styles, etc. in the data flow are mapped to the actual DOM is up to the bridge. As long as the client and server sides of the bridge understand each other, it's better not to define a standard.

Attaching an application listener on the server side causes the bridge to create all the JavaScript required in the browser, plus the component-event naming and mapping on the server side, so that events in the browser activate listeners on the server side.

Any change in the server-side components causes HTML deltas to be sent to the browser. The bridge should support some kind of transactions, so that if there is a large number of changes in the view, or an exception is possible, the changes to the server-side view components can be rolled back. This is also useful when the number of changes is so large that the bridge decides it is better to send a full branch of the DOM instead of deleting/adding inner elements.
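
One way to picture those rollback-able deltas is to buffer them until commit. A minimal sketch, with the class name and delta strings invented for illustration (a real bridge would buffer component-tree changes, not strings):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "transactional deltas": component changes are buffered and only
// flushed to the browser on commit; rollback simply discards the buffer.
public class ViewTransaction {

    private final List<String> pendingDeltas = new ArrayList<>();
    private final List<String> sentToBrowser = new ArrayList<>();

    /** Record a view change without sending it yet. */
    public void record(String delta) { pendingDeltas.add(delta); }

    /** Flush buffered deltas to the browser (simulated here by a list). */
    public void commit() {
        sentToBrowser.addAll(pendingDeltas);
        pendingDeltas.clear();
    }

    /** Discard buffered deltas, e.g. after an exception while building the view. */
    public void rollback() { pendingDeltas.clear(); }

    public List<String> sent() { return sentToBrowser; }

    public static void main(String[] args) {
        ViewTransaction tx = new ViewTransaction();
        tx.record("setText(label1, 'hi')"); // an exception occurs afterwards...
        tx.rollback();                      // ...so nothing reaches the browser
        tx.record("addRow(table1)");
        tx.commit();
        System.out.println(tx.sent()); // [addRow(table1)]
    }
}
```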

The bridge should have a pluggable architecture. Browser plugins add browser support to the bridge. Once a plugin is added, the bridge can select the best fit for each new session, though the application can override the selection. Browser plugins generate HTML components, layouts, styles and scripts for that brand and version.

Another kind of plugin should be component libraries. Component libraries may depend on JavaScript libraries. If more than one component library is used in the same application, their JavaScript libraries must be compatible. If the application lives in a portal, make sure the JavaScript libraries are compatible with those used by the portal.

I am not saying HTTP is dead, but it is a very simple protocol, badly suited for RIAs. Let's stop losing time with HTTP and all those twisted technologies.

There is a much better life for RIAs beyond HTTP. Websockets are the new holy grail for RIAs. But let's do it like wise people: let's develop a RAD tool with a websocket bridge and a visual IDE. Let's do it well the first time.
